-------------
|.?.+?(.%.$$|
|+.)?\@?+!?$|
--+----------
Hey, I’m Jan (or janEbert on GitHub)! I like optimizing loops, thinking in parallel, applying mathematics, and hardcore challenges. You can find the best compilation of my scientific work on Google Scholar.
Ever since my Bachelor’s thesis in 2017, I’ve been heavily invested in deep learning; my favorite projects from my university days include:
- Super Mario World level generation framework
- Reinforcement learning for Durak
- Generative adversarial network framework
I’m currently working at the Helmholtz AI division of the Jülich Supercomputing Centre, where I lead the pre-training work package in the TrustLLM EU project.
Some of my past work projects include:
- Core engineering role in the pre-training work package of OpenGPT-X, implementing, ablating, and optimizing LLM training while balancing compute efficiency between training and inference.
- Development of Diff-STRAL, a differentiable ray tracer for learning heliostat (mirror) shapes from their reflections; published in Nature Communications.
- AlphaNumerics Zero: Using reinforcement learning to optimize a numerical solver.
- Reproducing OpenAI’s DALL-E model, which also led to me co-founding LAION.
- Contributing to MLCommons MLPerf benchmarks.