Senior Data Scientist
United States, Washington, Redmond
Overview

Microsoft Cloud Operations + Innovation (CO+I) is the engine that powers Microsoft's cloud services. Our team is dedicated to delivering high-quality infrastructure to support cloud operations. As Microsoft's cloud business continues to mature, our infrastructure expansion accelerates, with Data Centers at the core of this growth. To support this momentum, we are scaling the acquisition and development of our owned, designed, and constructed Data Center facilities. In parallel, we continue to lease and acquire Data Center capacity at pace, especially in high-growth markets. This involves close collaboration with Data Center operators across regions and around the globe.

We are seeking a skilled Senior Data Scientist to join our CO+I Lease and Land Development Digital Transformation team. This role is ideal for someone who thrives in a fast-paced environment, enjoys solving complex problems, and is passionate about using data to influence product strategy and enhance customer experience. You will collaborate with cross-functional teams, including PMs, engineers, and business stakeholders, to deliver AI and Agentic AI solutions, actionable insights, and scalable data systems.

Microsoft's mission is to empower every person and every organization on the planet to achieve more. As employees, we embrace a growth mindset, innovate to empower others, and collaborate to achieve shared goals. Every day, we build on our values of respect, integrity, and accountability to foster a culture of inclusion where everyone can thrive at work and beyond. In alignment with our Microsoft values, we are committed to cultivating an inclusive work environment that positively impacts our culture every day.
Responsibilities

- Apply modification techniques to transform raw data into formats compatible with downstream systems. Utilize software and computing tools to ensure data quality and completeness. Implement code to extract and validate raw data from upstream sources, ensuring accuracy and reliability.
- Write efficient, readable, extensible code from scratch that spans multiple features/solutions. Develop technical expertise in proper modeling, coding, and/or debugging techniques, such as locating, isolating, and resolving errors and/or defects. Leverage technical proficiency with big-data software engineering concepts such as the Hadoop ecosystem, Apache Spark, continuous integration and continuous delivery (CI/CD), Docker, Delta Lake, MLflow, Azure Machine Learning (AML), and representational state transfer (REST) application programming interface (API) consumption/development.
- Acquire data necessary for successful completion of the project plan. Proactively detect changes and communicate them to senior leaders. Develop usable data sets for modeling purposes. Contribute to ethics and privacy policies related to collecting and preparing data by providing updates and suggestions around internal best practices. Contribute to data integrity/cleanliness conversations with customers.
- Adhere to data modeling and handling procedures to maintain compliance with laws and policies. Document data types, classifications, and lineage to ensure traceability and govern data accessibility.
- Perform root cause analysis to identify and resolve anomalies. Implement performance monitoring protocols and build visualizations to monitor data quality and pipeline health.
- Support and monitor data platforms to ensure optimal performance and compliance with service level agreements.
- Demonstrate knowledge and implementation of machine learning, the application of artificial intelligence (AI) that gives systems the ability to automatically learn and improve from experience without being explicitly programmed.
- Leverage knowledge of machine learning solutions (e.g., classification, regression, clustering, forecasting, NLP, image recognition) and individual algorithms (e.g., linear and logistic regression, k-means, gradient boosting, autoregressive integrated moving average [ARIMA], recurrent neural networks [RNN], long short-term memory [LSTM] networks) to identify the best approach to complete objectives.
- Understand modeling techniques (e.g., dimensionality reduction, cross-validation, regularization, encoding, ensembling, activation functions) and select the correct approach to prepare data, train and optimize the model, and evaluate the output for statistical and business significance. Understand the risks of data leakage, the bias/variance tradeoff, methodological limitations, etc.
- Write all necessary scripts in the appropriate language: T-SQL, U-SQL, KQL, Python, R, etc.
- Construct hypotheses, design controlled experiments, analyze results using statistical tests, and communicate findings to business stakeholders. Effectively communicate with diverse audiences on data quality issues and initiatives.
- Understand operational considerations of model deployment, such as performance, scalability, monitoring, maintenance, integration into engineering production systems, and stability. Develop operational models that run at scale through partnership with data engineering teams.
- Coach less experienced engineers on data analysis and modeling best practices.
- Develop a strong understanding of the Microsoft toolset in artificial intelligence (AI) and machine learning (ML) (e.g., Azure Machine Learning, Azure Cognitive Services, Azure Databricks).
- Design and implement dashboards: develop user-friendly dashboards for various applications, such as Supplier Spend Analytics, Supplier Scorecards, Incident and Service Level Agreement (SLA) Compliance Monitoring, Spares and Inventory Management, and other business-facing applications.