Machine Learning Engineer

About the job

Job Profile Summary
The Machine Learning Engineer will be responsible for implementing and maintaining data science models in bpx’s machine learning studio. The role serves as a subject matter expert in machine learning operations and the associated technology, bridging the gap between pure data science and the computational requirements needed to meet business outcomes. The role will help guide bpx’s data science journey on a nascent technology stack, and the ML Engineer will have significant freedom and latitude to suggest and implement solutions.

Job Advert
Key Accountabilities:

  • Implement data science algorithms in bpx’s ML Studio (SageMaker)
  • Create systems and processes to monitor performance of ML algorithms in production
  • Serve as the subject matter expert for ML Operations and guide data scientists in the practical implications of model design
  • Collaborate with the data engineering team to build and maintain data pipelines from systems like Snowflake and OSI Pi
  • Partner with bpx Architecture team to ensure endpoints, compute, and network considerations are built into solutions
  • Take initiative and stay up to date with the latest data science trends, techniques, and best practices, determining how to incorporate the most suitable practices into the department
  • Work as part of a geographically dispersed team, effectively communicating prioritized business needs and project statuses
  • Design systems to balance cost and performance to meet business outcomes

Essential Education:

  • A Bachelor’s degree (Master’s preferred) in Statistics, Mathematics, Computer Science, or another related quantitative field

Essential Experience and Job Requirements:

  • 7+ years in data science or a related field, including 3+ years of hands-on experience in machine learning operations
  • Proven track record of implementing and scaling models in an operations- or customer-focused company
  • Strong programming skills in Python and cloud implementation scripting
  • Experience with big data, real-time streaming data technologies, and cluster computing environments
  • Knowledge and exposure to cloud technologies, especially AWS

We offer a reward and wellbeing package to enable your work to fit with your life. These can include, but are not limited to, access to health, vision and dental insurance, flexible working schedule, paid time off policy, discretionary annual bonus program, long-term incentive program, and a generous 401K matching program. How much do we pay (Base)? $106,000-$160,000

  • Note that the pay range listed for this position is a good faith and reasonable estimate of the range of possible base compensation at the time of posting.

Entity
Production & Operations

Job Family Group
IT&S Group

Relocation available
No

Travel Required
Negligible travel

Country
United States of America

About BP
PRODUCTION & OPERATIONS
This is the place to truly drive change. Our people develop hydrocarbon resources, deliver projects, and operate refineries as well as oil and gas production assets.

Join us and make a difference by:

  • making our production and operations safer and more standardised
  • driving quicker reduction of our carbon emissions
  • growing cash returns and delivering improved reliability and optimisation
  • maximising efficiency through sharing resources
  • accelerating the digital transformation of our operating assets
  • developing our people faster, leveraging the scale of P&O
  • building greater integration and collaboration in service of our purpose

Experience Level
Intermediate

Data Engineer

Headquartered in Denver, Colorado, our bp/bpx energy business operates across a vast area within our Upstream market, from the north of Texas to the Rocky Mountains. Here, we manage a diverse portfolio, and we are pushing ahead with reducing emissions through the introduction of drone-mounted sensors and ‘green completion’ well technology. The ‘X’ in the name stands for exploration, which means finding new resources and new ideas and ways to improve what we do.
As part of our bpx energy business, you will be part of a world-class team and have a real chance to reach your career goals. You will grow your skills through bp’s wide-reaching network and keep advancing while reaching your biggest ambitions. Join us and we will move the industry forward together. bpx energy is a wholly owned subsidiary of bp.
As part of the Data Platforms squad, the Sr. Data Engineer position contributes to the overall BPX Data Platform Strategy. 
Key accountabilities

  • Define data workflow, pipelines, security guidelines, policies, and procedures
  • Document the guidelines, policies, and procedures on the Digital Hub for Squad members to reference
  • Engage in Pilot and Proof of Concepts as new features are released, or new data integration/ingestion use cases arise from Squad members
  • Provide oversight to ensure that Squad members are following the approved data workflow, pipelines, and security guidelines
  • Provide Data Engineering certification approval for Squad members
  • Evaluate, compare, and recommend new Data Platform enhancements and tools, as well as new features of the existing Data Platform
  • Partner and provide guidance to Squads on technical direction and approved integration patterns
  • Present modifications to Data Platforms to the Architecture Review Board, for approval
  • Ensure stable Data Platform infrastructure and solutions
  • Operate comfortably as an individual contributor, using influence and expertise to aid in the transformation of the organization
  • Stay knowledgeable of industry trends and best practices, and stay current on new technology as it comes to market
  • Participate in the Data Platform Guild

Essential experience and job requirements

  • 5-7+ years of Data Engineering experience
  • 5-7+ years of relevant work experience in IT/Data and/or Analytics space
  • Experience with cloud platforms, AWS and Azure preferred
  • Experience in any cloud data warehouse, Snowflake preferred
  • Experience with replication tools, Fivetran/HVR preferred
  • Experience with transformation tools, dbt and ADF preferred
  • Experience with programming tools, Python preferred
  • Experience with REST APIs for data ingestion
  • Strong understanding of ETL/ELT processing with large data stores
  • Experience designing and delivering large scale, 24-7, mission-critical data pipelines and features using modern big data architectures
  • Experience with stream processing services for pub/sub models such as Kafka, AWS Kinesis, Apache Storm, Spark Streaming, Azure Event Grid, AWS EventBridge, etc.
  • Demonstrated experience working in large-scale data environments which includes continuous and batch processing
  • Experience working with JSON/Parquet formats, in Snowflake preferred
  • Strong data modeling skills (relational, dimensional, and flattened)
  • Strong analytical and SQL skills, with attention to detail
  • Ability to aid in tuning and performance recommendations for poorly performing SQL queries and/or Python scripts
  • Knowledge of Database Administration tasks – Indexing, SQL Tuning/Performance, Backup/Recovery, DR
  • Experience with CI/CD pipeline management/configuration
  • Experience with Azure DevOps repositories, git experience acceptable
  • Ability to work with multiple external teams and accomplish shared goals by building consensus
  • Knowledge of Data Virtualization tools, Denodo preferred
  • Strong communication (written/verbal) and collaboration skills
  • Consulting, negotiation, and relationship skills
  • Problem solving skills
  • Enthusiastic, high-energy individual, self-motivated, people-oriented, and self-directed
  • Must be an intelligent, articulate, and persuasive leader who can serve as an effective member of the team and communicate concepts to technical and nontechnical colleagues
  • Must be able to maintain focus on achieving results, whilst being patient and pragmatic

 
Desirable criteria & qualifications

  • Knowledge of Snowflake configuration, administration, and setup best practices (security, external stores, cloning, etc.)
  • Desire to continually learn outside of a classroom environment, and successfully apply learnings
  • Demonstrated willingness to both teach others and learn new techniques

Travel required
Yes – up to 5%
Employment Type
Full-time: Denver, CO or Houston, TX based