Work Experience
Data Engineer
Carrefour•  December 2022 - June 2024•  Remote
Dedicated Data Engineer focused on Google Cloud Platform (GCP) solutions, collaborating with Carrefour in an international team environment. Experienced in leveraging GCP tools such as Airflow, BigQuery, and Cloud Storage to build robust data pipelines. Proficient in Agile methodologies (Scrum and Kanban) and in using Jira and Confluence for project management. Strong in Shell scripting and in Python and SQL programming to deliver effective ETL solutions.
Responsibilities:
• GCP Data Solutions: Designed and implemented data solutions on GCP, using Airflow for workflow orchestration, BigQuery for data warehousing, and Cloud Storage for scalable storage.
• Client Collaboration: Worked closely with Carrefour stakeholders in an international setting to understand requirements and deliver tailored data engineering solutions.
• Agile Methodologies: Practiced Scrum and Kanban to ensure efficient project delivery and collaboration within cross-functional teams.
• Project Management: Used Jira and Confluence for project tracking, task management, and documentation.
• Compute Engine and Shell Scripting: Leveraged GCP Compute Engine for compute resources and Shell scripting for automation and system management tasks.
• Programming: Developed and optimized ETL processes in Python and SQL, ensuring reliable data extraction, transformation, and loading.
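As a portfolio illustration of the extract-transform-load pattern this role centered on, here is a minimal plain-Python sketch (field names and data are invented; in the actual pipelines this logic ran as Airflow tasks writing to BigQuery):

```python
# Minimal ETL sketch: extract rows, transform them, load into a target store.
# All names and records here are hypothetical, purely for illustration.

def extract(source):
    """Extract: read raw records from a source (here, an in-memory list)."""
    return list(source)

def transform(rows):
    """Transform: normalize customer names and keep only completed orders."""
    return [
        {"customer": r["customer"].strip().title(), "total": r["total"]}
        for r in rows
        if r["status"] == "completed"
    ]

def load(rows, target):
    """Load: append transformed rows to the target store."""
    target.extend(rows)
    return target

raw = [
    {"customer": " alice ", "total": 30.0, "status": "completed"},
    {"customer": "bob", "total": 15.0, "status": "cancelled"},
]
warehouse = []
load(transform(extract(raw)), warehouse)
print(warehouse)  # only Alice's completed order survives the transform
```

The same three-stage shape holds whether the "load" target is a Python list or a BigQuery table.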
Data Engineer
act digital•  April 2022 - November 2022•  Remote
Focused and skilled Data Engineer with expertise in porting Python code to Databricks and developing ETL processes. Proficient with Azure technologies, SQL Server, and various data processing tools. Experienced in managing data workflows and performing data analysis to support business decisions.
• Python to Databricks Migration: Ported and optimized Python code for execution within Databricks, ensuring seamless integration and efficient data processing.
• ETL Development: Developed and analyzed ETL processes to automate data workflows, using technologies such as Azure, SQL Server, and Azure Synapse.
• Azure Integration: Used Azure services, including Azure DevOps, Event Grid, and Azure Synapse, for end-to-end data solution development and management.
• Database Management: Managed and optimized SQL Server and Sybase databases, ensuring high performance and reliability.
• Data Analysis and Processing: Conducted data analysis and processing with Python, SAS, and Databricks, providing actionable insights for data-driven decision-making.
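Porting imperative Python to Databricks usually means rewriting loops into the functional filter/map style of Spark's DataFrame API. A toy sketch of that rewrite, in plain Python (names and data are hypothetical):

```python
# Sketch of "porting" imperative Python toward the functional style used
# when moving code to Databricks/PySpark. Everything here is illustrative.

# Imperative original:
def totals_imperative(orders):
    out = []
    for o in orders:
        if o["qty"] > 0:
            out.append(o["qty"] * o["price"])
    return out

# Functional rewrite - this shape maps directly onto
# df.filter(col("qty") > 0).select(col("qty") * col("price")) in PySpark:
def totals_functional(orders):
    return [o["qty"] * o["price"] for o in orders if o["qty"] > 0]

orders = [{"qty": 2, "price": 5.0}, {"qty": 0, "price": 9.9}]
print(totals_imperative(orders), totals_functional(orders))  # identical results
```

Expressing the logic declaratively is what lets Spark parallelize it across a cluster.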
Data Engineer
ALLKANCE TECHNOLOGY•  October 2021 - June 2022•  Remote
Experienced Data Engineer with a strong background in building ETL processes and analyzing complex databases. Proficient in PL/SQL, Python, Hadoop, and Spark, with extensive experience in query development and tuning. Skilled in using Control-M to schedule, monitor, and manage batch jobs.
• ETL: Developed ETL processes in PL/SQL and Python on an on-premise Hadoop/Spark cluster, ensuring efficient data extraction, transformation, and loading.
• Database Analysis: Analyzed complex Oracle and SAP databases to extract insights and support data-driven decision-making.
• SQL Development and Tuning: Created and optimized complex queries to improve data retrieval and processing efficiency.
• Control-M: Used Control-M to schedule, monitor, and manage batch jobs, ensuring timely and efficient execution of data processing tasks.
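The query-tuning workflow mentioned above can be illustrated with the standard library's SQLite driver (the actual work targeted Oracle and SAP; table and index names below are invented):

```python
import sqlite3

# Illustrative tuning loop: inspect the query plan, add an index on the
# filter column, confirm the plan changed. Data and names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER, region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?, ?)",
    [(i, "south" if i % 2 else "north", i * 1.5) for i in range(1000)],
)

query = "SELECT SUM(amount) FROM sales WHERE region = 'north'"

# Before tuning: the engine must scan the whole table.
print(conn.execute("EXPLAIN QUERY PLAN " + query).fetchall())

# Tuning step: an index on the filter column lets the engine seek, not scan.
conn.execute("CREATE INDEX idx_sales_region ON sales (region)")
plan = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(plan)  # the plan should now reference idx_sales_region
```

The same inspect-index-reinspect cycle applies on Oracle via `EXPLAIN PLAN`.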
Data Engineer
NTT Data•  June 2021 - October 2021•  Remote
Highly skilled AWS Data Engineer with extensive experience developing and maintaining data pipelines. Proficient in AWS tools and programming languages for managing, processing, and analyzing large datasets. Adept at creating and optimizing SQL queries, following Agile methodologies, and collaborating effectively with teams.
• Data Pipeline Development and Maintenance: Developed and maintained data pipelines for processing, automation, and data treatment using S3, Lambda, Glue, Athena, CodePipeline, and Git.
• Database Analysis: Analyzed complex, large-scale databases to derive actionable insights and ensure data integrity.
• Query Tuning: Created and improved SQL queries to enhance data retrieval and processing efficiency.
• Agile Methodology: Applied Agile practices to optimize delivery times and ensure high-quality project outcomes.
• Team Collaboration and Communication: Worked effectively in team environments, demonstrating strong communication skills and contributing to collaborative projects.
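A typical building block of the S3/Lambda pipelines described above is a handler that unpacks an S3 event notification. A self-contained sketch (the event shape follows the standard S3 "ObjectCreated" notification; bucket and key names are invented, and no AWS calls are made):

```python
# Hypothetical sketch of a Lambda-style handler reacting to an S3 event.
# In a real pipeline the handler would go on to read the object with boto3.

def handler(event, context=None):
    """Return (bucket, key) for every record in an S3 event notification."""
    return [
        (rec["s3"]["bucket"]["name"], rec["s3"]["object"]["key"])
        for rec in event.get("Records", [])
    ]

sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "raw-data"},
                "object": {"key": "2021/06/orders.csv"}}}
    ]
}
print(handler(sample_event))  # [('raw-data', '2021/06/orders.csv')]
```

Keeping the event parsing in a pure function like this makes the handler easy to unit-test without AWS access.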
Data Engineer
PROJETAS SOLUÇÕES INTEGRADAS•  June 2020 - July 2021•  Remote
Experienced Data Engineer specialized in SQL Server support and ETL tooling. Proficient in PySpark, SQL, AWS, and Athena, with extensive experience in JDBC connections, Glue, Lambda, and S3 bucket management. Skilled in file manipulation, data treatment and processing, and the creation and maintenance of ETL pipelines for Big Data environments.
• Database Support and Tools: Supported SQL Server, SSISDB, Tableau, NiFi, and SQL Server Agent jobs. Used tools such as MSSQL, DBeaver, and Athena for data management and analysis.
• Programming and Cloud Technologies: Advanced programming in PySpark and SQL. Experience with AWS services, including Athena, and integrations via JDBC connections, Glue, Lambda, and S3 buckets.
• Data Manipulation and Processing: Expertise in file manipulation and data treatment; efficient at processing and transforming large volumes of data.
• ETL Creation and Maintenance: Developed and maintained ETL processes to integrate data from sources such as Teradata, SQL Server, Salesforce, and SAP into Big Data environments using Hadoop, Hive, Sqoop, MapReduce, and Python with PySpark.
• DevOps Competencies: Product development, data pipelines, CI/CD, and version control with Git.
Technical Skills:
• Programming Languages: PySpark, SQL, Python, and T-SQL
• Platforms and Tools: AWS (Athena, S3), Hadoop, Hive, Sqoop, MapReduce, PowerCenter, DbVisualizer, Tableau, NiFi
• Database Tools: SQL Server, SSISDB, MSSQL, DBeaver, Athena
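The MapReduce pattern named above can be shown in miniature with plain Python (the real jobs ran on Hadoop with Hive and Sqoop; the data here is invented):

```python
from itertools import groupby
from operator import itemgetter

# Toy word-count in the classic map / shuffle / reduce shape.
# Purely illustrative - on Hadoop each phase runs distributed across nodes.

def map_phase(lines):
    """Map: emit a (word, 1) pair for every word in the input."""
    return [(word, 1) for line in lines for word in line.split()]

def reduce_phase(pairs):
    """Shuffle: group pairs by key; Reduce: sum the counts per key."""
    pairs = sorted(pairs, key=itemgetter(0))
    return {
        key: sum(count for _, count in group)
        for key, group in groupby(pairs, key=itemgetter(0))
    }

counts = reduce_phase(map_phase(["etl etl pipeline", "pipeline"]))
print(counts)  # {'etl': 2, 'pipeline': 2}
```

The sort-then-group step stands in for Hadoop's shuffle, which routes all pairs with the same key to the same reducer.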
Data Scientist
TECNISYS•  April 2019 - June 2020•  Goiânia, Brazil
Experienced Data Scientist with a strong background in analyzing large datasets to extract valuable insights and support strategic decision-making. Skilled in predictive modeling, data mining, data visualization, and communicating findings effectively to stakeholders.
• Data Analysis: Collected, cleaned, and analyzed large datasets from diverse sources to identify trends, patterns, and anomalies, driving data-driven decisions.
• Predictive Modeling: Developed and implemented statistical and machine learning models to forecast outcomes and enhance business strategies.
• Data Visualization: Created interactive dashboards and visual reports with tools such as Pentaho, Power BI, and Python libraries (matplotlib, seaborn), making data accessible and understandable for non-technical stakeholders.
• Data Mining: Applied data mining techniques to uncover hidden information and gain actionable insights, improving processes and strategies.
SQL Server Database Administrator
Indra Company•  June 2016 - April 2019•  Goiânia, Brazil
As a SQL Server Database Administrator at Indra Company, I was responsible for the performance, availability, and security of the organization's SQL Server databases. Key responsibilities:
• Database Maintenance and Monitoring: Implemented proactive monitoring strategies to ensure optimal performance and availability of SQL Server databases. Performed regular maintenance tasks such as index rebuilds, database backups, and integrity checks.
• Performance Tuning: Identified and resolved performance bottlenecks by optimizing SQL queries, indexes, and database configurations. Used tools and techniques to improve query performance and overall database efficiency.
• Database Design and Implementation: Collaborated with development teams to design and implement database schemas, tables, stored procedures, and functions. Ensured database designs aligned with application requirements and scalability needs.
• ETL Development: Built complex ETL pipelines for several target systems using Python, Spark, and SQL, reading from sources that could be files or databases.
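The integrity-check and backup routine described above can be sketched with the standard library's SQLite driver (the production work used SQL Server equivalents such as DBCC CHECKDB and full backups; all names and data here are invented):

```python
import sqlite3

# Illustrative maintenance routine: verify integrity, then take a backup.
src = sqlite3.connect(":memory:")
src.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance REAL)")
src.execute("INSERT INTO accounts VALUES (1, 100.0)")

# Integrity check (analogous to DBCC CHECKDB on SQL Server).
status = src.execute("PRAGMA integrity_check").fetchone()[0]
print(status)  # 'ok' when no corruption is found

# Online backup to a second database (analogous to a full database backup).
dst = sqlite3.connect(":memory:")
src.backup(dst)
restored = dst.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0]
print(restored)  # the copy contains the same data
```

Scheduling such checks before every backup is what makes restores trustworthy.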
Education
Universidade Federal de Goiás
Computer Science, BS•  August 2010 - December 2016