WDC's Quality Organization is seeking a Data Engineer/Scientist with 2-5 years of experience in computer science, engineering, or mathematics/statistics to join its data analytics team. Team members work as part of a global engineering team responsible for the reliability of WDC (HDD/SSD) products as they are integrated into our customers' product environments. Our team in particular works to assure optimal reliability of our drives in each of the many customer environments. In that capacity, deep knowledge of coding techniques, machine learning, and visualization is needed to expand our analytics capability.

The scope of this position includes data mining and both UI and backend data manipulation to create tools for product characterization. These tools will be deployed both in our internal engineering data systems and in customer environments. The associated analysis and ongoing feature improvements are required to make the most reliable product possible. This effort will require creating machine learning models for drive data that product designers and customer engineers can use to understand HDD performance and reliability. This position requires working with an international team of engineers responsible for measuring and documenting performance results and problem resolution.
Experienced or New College Hire to join the Data Analytics and Reliability Engineering department (Major: Data Science)
Qualified candidates should have an MS or PhD degree in Computer Engineering, Computer Science, or Engineering from an accredited university.
Candidates should possess or be comfortable using MATLAB, programming and scripting in C/C++, and SQL and NoSQL databases, and should have a basic knowledge of statistics. Experience with Linux is required, as is experience with big data platforms.
Professional business communication competency, both verbal and written, is required, as is competency with basic office technology (general PC use, email, spreadsheets, etc.).
Interest in extensive hands-on computer coding is expected.
• Strong background in Java and Python
• Experience with SPSS
• Knowledge of Linux OS is a must
• Technical experience with big data computing platforms such as Hadoop and Hive
• Knowledge of SQL databases is required
• Knowledge of cloud technologies is highly preferred
• Experience with web application development
• Experience with Spotfire and RStudio is preferred
• Background in math, statistics, and machine learning methodologies is preferred
• Strong communication skills, both verbal and written, are a must
Additional tasks may include:
• Establish efficient ways to analyze data.
• Monitor data systems to ensure data quality and system performance.
• Troubleshoot and mitigate data quality problems.
• Establish and document data system maintenance plans for tool/data support.