Data Ops Engineer Job at ScriptString
Your Purpose
The Data Ops Engineer is responsible for the day-to-day technical development and delivery of data pipelines for the ScriptString.AI platform. You will ensure solutions are delivered on a backbone of sound architecture and data engineering best practices around operational efficiency, security, reliability, performance, and cost optimization.
The key focus areas of the role are:
- Support DevOps with business-as-usual (BAU) activities when required.
- Support the design, build, and optimization of data engineering pipelines that extract data from different sources and applications into the ScriptString.AI platform.
- Work closely with customers during the onboarding process and assist with the building and testing of data extraction, data transformation, and reporting deliverables within the ScriptString.AI platform.
To be successful in this role, we are seeking an experienced Data Ops Engineer with demonstrated knowledge and experience in data delivery, ETL/ELT, data management, and solution design and delivery. Critical to success will be strong communication skills, an analytical mindset, excellent documentation skills, and demonstrable experience working with a variety of business stakeholders.
Experience
Essential
- 2+ years of experience working within the AWS cloud platform.
- 2+ years' development experience with cloud-based technologies within AWS.
- Hands-on experience building ETL/ELT solutions for large-scale data pipelines within the AWS cloud.
- 2+ years of hands-on experience in data processing (using Python) for cloud data platforms, scheduling and monitoring of ETL/ELT jobs (e.g. AWS Glue and Lambda).
- 2+ years working with Python, JavaScript back ends (e.g. Node.js), and tools like PyTorch.
- Experience with solution architecture, data ingestion, query optimization, data segregation, ETL/ELT, and CI/CD frameworks.
- Experience with complex query authoring as well as a variety of SQL and ETL/ELT software (e.g. AWS Glue).
- Experience in data analysis, relational and dimensional data modeling, data integration, data warehousing, OLTP, OLAP, and database/schema design.
- Experience handling data from a variety of sources such as CSV, JSON, XML, relational databases, and Amazon S3.
- Experience developing technical and support documentation and translating business requirements into reports and data models.
Desirable but not essential
- 1+ years' experience with data visualization tools.
- 1+ years' experience working within Agile delivery frameworks, including Scrum, Kanban, and Scrumban.
Your Responsibilities
Design, Development, and Support
- Design and develop data engineering assets and scalable engineering frameworks to support customer data demands.
- Code, test, and document new or modified data models and ETL/ELT tools to create robust and scalable data assets.
- Expand data platform capabilities to resolve new data problems and challenges by identifying, sourcing, and integrating new data.
- Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
- Peer review code and promote a DevOps culture within the data team.
- Develop ETL/ELT solutions using Amazon S3, AWS Database Migration Service (DMS), AWS Glue, Lambda, Spark, and Python to load data from multiple sources into the ScriptString.AI data platform within Snowflake (a minimal illustrative sketch follows this list).
- Implement solutions that adhere to architecture best practices.
- Contribute to our ambition to develop a best-practice Data and Analytics platform, leveraging next-generation cloud technologies.
- Define and build the data pipelines that will enable faster and better data-informed decision-making within the business.
- Ensure data integrity within reports and dashboards by reviewing data, identifying and resolving gaps and inconsistencies, and escalating as required to foster a partnered approach to data accuracy for business reporting purposes.
- Work on cross-functional solutions focusing on business and process improvement.
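For illustration only (not part of the role requirements): below is a minimal, hypothetical PySpark sketch of the kind of pipeline described above. It reads raw CSV from an S3 landing path, applies a simple cleanup, and writes curated Parquet back to S3 for a downstream load into Snowflake. All bucket names, paths, and column names are placeholder assumptions.

```python
# Hypothetical ETL sketch (e.g. run as an AWS Glue PySpark job).
# Buckets, paths, and columns below are placeholders, not real systems.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-etl").getOrCreate()

# Extract: read raw CSV files from an S3 landing zone.
raw = (
    spark.read
    .option("header", True)
    .csv("s3://example-landing-bucket/orders/")
)

# Transform: cast types, trim text fields, and drop obviously bad rows.
clean = (
    raw.withColumn("order_id", F.col("order_id").cast("long"))
       .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
       .withColumn("customer_name", F.trim("customer_name"))
       .filter(F.col("order_id").isNotNull())
)

# Load: write curated Parquet for downstream ingestion into Snowflake
# (e.g. via COPY INTO or Snowpipe).
(
    clean.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-curated-bucket/orders/")
)
```

In practice a script like this would typically be scheduled and monitored as an AWS Glue job or orchestrated alongside Lambda, with the Snowflake load handled separately.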
Data Platform Administration
- Maintain the platform by performing regular tasks such as user management and auditing, resource utilization monitoring, alert monitoring, and code reviews.
- Modify existing ETL processes to achieve automation where possible and to accommodate changes in the data structure.
- Maintain a good understanding of data platform best practices to achieve economies of scale, cost reduction, and efficiency.
Skills
- Strong business analysis skills with an ability to understand complex business processes
- Well-developed organizational and time management abilities
- Excellent communication and presentation skills (verbal & written)
- Excellent interpersonal skills
- Technical and analytical mindset
Job Type: Full-time
Salary: $50,000.00-$65,000.00 per year
Benefits:
- Flexible schedule
- Work from home
Schedule:
- 8-hour shift
Experience:
- Working within AWS cloud platform: 1 year (required)
- Building ETL/ELT solutions for large-scale data pipelines: 1 year (required)
- Working with Python: 1 year (required)
Work Location: Remote