🌎 Based in Argentina | 💼 Open to Remote & Freelance Roles | 🚀 Focused on ML, Analytics & Cloud
I'm a Data Scientist with a background in Industrial Engineering and expertise in cloud computing, automation, and data analytics. I combine analytical thinking with practical skills to build solutions that help businesses unlock insights, save time, and scale sustainably.
After 4 years in engineering, I transitioned into tech by completing the EPAM DevOps Bootcamp and a comprehensive Data Science with Python program, covering:
- 📊 Statistics & Machine Learning
- 🧪 SQL for data analysis
- ⚙️ Business automation with Python
- ☁️ Cloud infrastructure (AWS: S3, EC2, Lambda, API Gateway)
I enjoy solving real-world problems by cleaning data, building dashboards, automating workflows, and deploying cloud solutions — always with empathy, clarity, and impact.
| Category | Skills & Tools |
|---|---|
| Machine Learning | scikit-learn, TensorFlow, PyTorch, XGBoost, BERT, Neural Networks |
| AWS Services | SageMaker, EMR, S3, Lambda, Glue, Redshift, QuickSight |
| Programming | Python, SQL, PySpark, Docker, Git |
| Data Science | Statistical Analysis, Feature Engineering, A/B Testing, Time Series Analysis |
| Certifications | AWS Solutions Architect Professional, AWS Machine Learning Specialty |
- Clean, transform, and analyze raw data
- Build interactive dashboards to support business decisions
- Automate manual tasks (Excel, reports, scraping, etc.), as in the sketch after this list
- Design and deploy ML models using Python and AWS
- Set up cloud resources on AWS for scalable projects
- Write documentation and train non-technical teams
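To make the automation bullet concrete, here is a minimal sketch of a typical Excel-report cleanup job; the file name and column names (`sales_export.xlsx`, `order_id`, `revenue`, `region`) are hypothetical:

```python
import pandas as pd

# Hypothetical input: a raw Excel export with inconsistent formatting
raw = pd.read_excel("sales_export.xlsx")

# Clean: normalize column names, drop rows missing an id, fix date types
raw.columns = raw.columns.str.strip().str.lower().str.replace(" ", "_")
clean = raw.dropna(subset=["order_id"]).copy()
clean["order_date"] = pd.to_datetime(clean["order_date"], errors="coerce")

# Summarize: monthly revenue per region, written out as a ready-to-share report
summary = (
    clean.groupby([clean["order_date"].dt.to_period("M"), "region"])["revenue"]
    .sum()
    .reset_index(name="total_revenue")
)
summary.to_excel("monthly_report.xlsx", index=False)
```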
Developed and deployed an ML model to predict customer churn for a telecommunications company.
**Business Problem:** High churn rate causing significant revenue loss.
**Implementation:** Built an XGBoost model with AWS SageMaker, deployed a real-time inference endpoint, and exposed it through an API built with AWS Lambda and API Gateway.
**Results:** Achieved 92% prediction accuracy and reduced churn by 24%, saving $2M annually.
**Tech:** AWS SageMaker, Python, scikit-learn, XGBoost, AWS Lambda, API Gateway
🔗 GitHub Repo | 🌐 Live Demo
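The serving layer pairs the SageMaker endpoint with a Lambda function behind API Gateway. A minimal sketch of what that Lambda handler could look like; the endpoint name and payload format are assumptions, not the deployed values:

```python
import json
import boto3

runtime = boto3.client("sagemaker-runtime")

def lambda_handler(event, context):
    """API Gateway -> Lambda -> SageMaker real-time endpoint."""
    # Expect a CSV row of customer features in the request body
    payload = json.loads(event["body"])["features"]  # e.g. "34,79.85,1,0"

    response = runtime.invoke_endpoint(
        EndpointName="churn-xgboost-endpoint",  # hypothetical endpoint name
        ContentType="text/csv",
        Body=payload,
    )
    score = float(response["Body"].read())  # XGBoost returns a churn probability

    return {
        "statusCode": 200,
        "body": json.dumps({"churn_probability": score}),
    }
```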
Designed and implemented a scalable data pipeline for processing customer transaction data.
**Business Problem:** Inefficient processing and analysis of large-scale transaction data.
**Implementation:** Created an ETL pipeline with AWS Glue, implemented data quality checks, and built a Redshift data warehouse.
**Results:** Reduced data processing time by 70% and enabled real-time analytics.
**Tech:** AWS S3, AWS Glue, Amazon Redshift, Apache Spark, AWS Lambda
🔗 GitHub Repo
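A Glue job script for a pipeline like this might look roughly as follows; the catalog database, table, connection, and column names are placeholders:

```python
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.transforms import Filter
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME", "TempDir"])
glue = GlueContext(SparkContext.getOrCreate())
job = Job(glue)
job.init(args["JOB_NAME"], args)

# Read raw transactions from the Glue Data Catalog (names are placeholders)
txns = glue.create_dynamic_frame.from_catalog(
    database="transactions_db", table_name="raw_transactions"
)

# Basic data quality check: keep only rows with a valid id and positive amount
valid = Filter.apply(
    frame=txns, f=lambda r: r["txn_id"] is not None and r["amount"] > 0
)

# Load the cleaned data into Redshift via a catalog connection (name assumed)
glue.write_dynamic_frame.from_jdbc_conf(
    frame=valid,
    catalog_connection="redshift-conn",
    connection_options={"dbtable": "analytics.transactions", "database": "dw"},
    redshift_tmp_dir=args["TempDir"],
)
job.commit()
```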
Created an interactive dashboard for sales performance visualization.
**Business Problem:** Lack of real-time visibility into sales metrics.
**Implementation:** Developed an Amazon QuickSight dashboard with drill-down capabilities and automated data refresh.
**Results:** Improved decision-making speed by 50% and increased sales team efficiency.
**Tech:** Amazon QuickSight, AWS Athena, S3, Python, Pandas
🔗 GitHub Repo
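QuickSight itself is configured in the console, but the Athena layer that feeds the dashboard can be prototyped in Python. A small sketch, assuming a hypothetical `sales` table registered over the S3 data:

```python
import awswrangler as wr

# Aggregate daily revenue per region from the Athena table (names assumed)
sql = """
    SELECT region,
           date_trunc('day', order_date) AS day,
           SUM(revenue) AS daily_revenue
    FROM sales
    GROUP BY 1, 2
"""
df = wr.athena.read_sql_query(sql, database="sales_db")

# Sanity-check the aggregates locally before pointing QuickSight at the dataset
print(df.head())
```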
Built a predictive analytics solution for inventory management.
**Business Problem:** Suboptimal inventory levels causing stockouts and excess inventory.
**Implementation:** Implemented a Prophet forecasting model with automated retraining and real-time predictions using AWS Lambda and DynamoDB.
**Results:** Reduced inventory costs by 30% and improved forecast accuracy by 40%.
**Tech:** Python, Prophet, AWS Lambda, DynamoDB, CloudWatch
🔗 GitHub Repo
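At its core, the forecasting step uses Prophet's standard fit/predict API. A minimal sketch, assuming a hypothetical `daily_demand.csv` history with Prophet's required `ds`/`y` columns:

```python
import pandas as pd
from prophet import Prophet

# Prophet expects two columns: ds (date) and y (value to forecast)
history = pd.read_csv("daily_demand.csv")  # hypothetical demand history

model = Prophet(weekly_seasonality=True, yearly_seasonality=True)
model.fit(history)

# Forecast the next 30 days of demand for reorder-point decisions
future = model.make_future_dataframe(periods=30)
forecast = model.predict(future)
print(forecast[["ds", "yhat", "yhat_lower", "yhat_upper"]].tail())
```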
Processed and analyzed large-scale application logs for operational insights.
**Business Problem:** Difficulty analyzing massive volumes of log data and deriving insights from them.
**Implementation:** Built an AWS EMR cluster for log processing and implemented real-time indexing with Elasticsearch and Kibana.
**Results:** Reduced Mean Time to Recovery (MTTR) by 60% and enabled proactive issue detection.
**Tech:** AWS EMR, Apache Spark, Elasticsearch, Kibana, S3
🔗 GitHub Repo
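The Spark job at the heart of this pipeline parses raw log lines and aggregates them before indexing. A simplified sketch, assuming a hypothetical S3 bucket and a `timestamp [LEVEL] message` log format:

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import count, regexp_extract, window

spark = SparkSession.builder.appName("log-analytics").getOrCreate()

# Hypothetical S3 location; each line looks like "2024-01-01 12:00:00 [ERROR] msg"
logs = spark.read.text("s3://my-app-logs/raw/")
pattern = r"^(\S+ \S+) \[(\w+)\] (.*)$"
parsed = logs.select(
    regexp_extract("value", pattern, 1).alias("ts"),
    regexp_extract("value", pattern, 2).alias("level"),
    regexp_extract("value", pattern, 3).alias("message"),
)

# Count errors per 5-minute window: the kind of aggregate used for alerting
errors = (
    parsed.filter(parsed.level == "ERROR")
    .withColumn("ts", parsed.ts.cast("timestamp"))
    .groupBy(window("ts", "5 minutes"))
    .agg(count("*").alias("error_count"))
)
errors.write.mode("overwrite").parquet("s3://my-app-logs/aggregated/errors/")
```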
Created an end-to-end ML pipeline for social media sentiment analysis.
**Business Problem:** Manual, error-prone ML deployment processes.
**Implementation:** Automated the ML pipeline with continuous training and deployment using AWS SageMaker, Step Functions, and CodePipeline.
**Results:** Reduced model deployment time by 80% and improved accuracy by 15%.
**Tech:** AWS SageMaker, Step Functions, CodePipeline, Docker, BERT
🔗 GitHub Repo | 🌐 Live Demo
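The pipeline automates training and deployment; the model itself is a BERT classifier. At inference time, the scoring step looks roughly like the sketch below, using an off-the-shelf BERT-family sentiment model as a stand-in for the fine-tuned one:

```python
from transformers import pipeline

# Pretrained BERT-family sentiment model standing in for the fine-tuned model
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

posts = [
    "Loving the new release, setup took five minutes!",
    "Support has been ignoring my ticket for a week.",
]
for post, result in zip(posts, classifier(posts)):
    print(f"{result['label']:>8} ({result['score']:.2f})  {post}")
```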
- 📧 rociomnbaigorria@gmail.com
- 🕒 Time zone: GMT-3 (Argentina)
- Freelance Data Science or Analytics Projects
- Remote Python Automation & ETL
- AWS Cloud Setup for MVPs or startups
- Entry-Level DevOps / MLOps collaboration
- Data strategy support for small teams
#DataScience #PythonAutomation #AWSforData #PowerBI #RemoteWork #FreelanceAnalytics #CloudFirst #WomenInData #WomenInCloud #AzureforData