Cambridge Hardware Intern
Cambridge, Cambridgeshire, United Kingdom
Overview
The rapid growth of interconnect technologies is continuously driving efficiency and performance improvements in AI infrastructure. At Microsoft Research, we are exploring novel network and memory technologies to empower next-generation AI infrastructure. We are looking to hire an intern to contribute to the design and development of an FPGA platform to drive our exploration. The successful candidate will work within a leading multidisciplinary industrial research team working to revolutionize future sustainable AI infrastructure. In this stimulating environment, the intern will have the opportunity to learn and grow their digital hardware design and debugging skills while testing and evaluating an FPGA-based emulation system. For example, this internship will include designing, emulating, and evaluating new memory subsystem designs on FPGAs, as well as developing the software stack that will run on this hardware platform.
Qualifications
Required/Minimum Qualifications:
- Currently enrolled in a Master's or PhD program in computer science, computer engineering, electrical and electronics engineering, or a related field.
Other Requirements:
- Experience working on research projects related to digital hardware design
- Demonstrated experience in the design and development cycle of FPGA- or ASIC-based hardware systems, i.e., from RTL coding to system-level testing and performance evaluation
- Thorough understanding of digital logic design concepts, from the system level down to low-level details
- Good RTL coding and debugging skills (Verilog/SystemVerilog preferred)
Preferred/Additional Qualifications:
- Experience with ML accelerators, memory controllers, network physical layers, or gigabit transceivers on modern FPGAs
- Experience with software programming (C/C++/Python)
Responsibilities
Successful candidates will be working in a small, multidisciplinary team of experts from the fields of FPGAs, optics, networking, and distributed systems, and will have an opportunity to gain hands-on experience developing AI inference hardware systems for future AI infrastructure in data centers. Candidates will be expected to implement a part of the system design, using RTL for design and verification, and to perform experiments on FPGA hardware to evaluate system performance and behaviour.