An Unofficial Guide to Prepare for a Research Position Application
The Core Principle: Understanding Over Implementation
The single biggest differentiator between successful and unsuccessful candidates isn't whether they completed the technical assessment; it's whether they understand what they built and can clearly explain that understanding.
Many candidates can implement complex systems using tutorials, documentation, and AI coding assistants. Far fewer can explain why each design choice was made, how it compares with the alternatives, what its limitations are, and how they'd improve it given more time.
What We Are Looking For
1. Questions That Distill the Problem Space
Strong candidates don't just answer our questions; they ask insightful, thought-provoking questions of their own that demonstrate a deeper understanding of the problem space, or a strong ability to unpack, decipher, and clarify it. When presented with a vague problem, they identify the core uncertainty and attack it directly.
Before defending your approach, ask yourself: Did I solve the right problem, or just a problem?
2. Prototypes Scoped To The Real Risk
The goal isn't to build everything. It's to build the thing that tests your riskiest assumption.
When discussing your work, be prepared to explain:
- What was the core hypothesis?
- What was the minimum experiment needed to test it?
- What did you learn, and what would you do next?
3. Explaining The Reasoning For Your Decisions
Every choice in your solution should have a reason. Not a perfect reason, but a reason you can articulate and defend.
"I tried X because I expected Y. I observed Z instead, which told me..."
This pattern of hypothesis, test, and update is the fundamental reasoning loop of research. Interviewers are looking for evidence of it in your work.
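To make the pattern concrete, here is a minimal sketch of what one entry in an experiment log might look like. The specific hypothesis and findings below are entirely hypothetical, chosen only to illustrate the shape of the loop:

```python
# One hypothetical hypothesis-test-update record from an experiment log.
# The claims are illustrative, not from any real submission.
experiment = {
    "hypothesis": "Lowering the learning rate will fix the loss spikes.",
    "test": "Rerun the baseline at lr/10 with the same seed and budget.",
    "observation": "Spikes persist and coincide with a data shard boundary.",
    "update": "The instability looks data-driven, not optimizer-driven; "
              "inspect the offending shard next.",
}

# Walk the loop in order: each stage should follow from the previous one.
for stage in ("hypothesis", "test", "observation", "update"):
    print(f"{stage}: {experiment[stage]}")
```

If every experiment in your submission can be summarized this crisply, the interview conversation tends to take care of itself.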
4. Communication That Lowers Ambiguity
Clarity isn't about saying more. It's about saying what matters.
Strong candidates:
- State conclusions first, then explain
- Identify what they don't know, explicitly
- Stop when the question is answered
Less successful candidates may speak too broadly or hesitate. If you notice you've been speaking for a while without concluding your thought, it's often helpful to pause and reset your approach. We are looking for candidates who want to talk about the interesting details of a problem (implementation details, mathematical details) rather than trying to impress us with exciting-sounding ideas that could be tried. Successful interviews are not just a series of questions and answers; they evolve into interesting discussions about the ideas presented in the technical assessment. We are also interested in your vision of how these ideas might evolve in the future.
5. "Good" Rabbit Holes
We are looking for people who like to think deeply about ideas, who are interested in the small details, and who enjoy discussing the implications of possible research decisions. Some interviewers (although not all) look forward to deviations off the beaten track. In other words, if you find a shared interest in a topic, idea, or approach, consider expanding on it. Interviewers have typically seen solutions to the technical set dozens of times: if you can find something that sets you apart, it will stand you in good stead.
6. Technical Capability versus Ideation And Creativity
Talented researchers typically strike a balance between being strong engineers and having actionable creative ideas. The technical set is designed to test both engineering and creativity. A common pitfall is to lean too heavily toward either end of the scale:
Being technically talented is often easy to demonstrate, and you should be proud of the hard work you have put into slick, performant solutions. However, in AI research "good enough" is often good enough, and you need to know when to stop perfecting and tweaking in favour of other aspects of your work or solution (e.g., writing up, demonstrating results creatively, etc.).
In 2026 it is surprisingly easy to seem creative and to have good ideas. However, ideas that remain unactionable are worthless; in some sense, researchers are in the business of ideas they can actually pursue. If you have ideas you want to talk to us about, we always enjoy hearing what you have to say. Be careful not to overshoot the mark in terms of ambition, though: it is much better to have interesting but simple ideas, rather than grandiose ones, for which you can take clear steps forward (i.e., build hypotheses, test them with experiments, make observations, and explain them clearly).
A very important skill for a machine learning researcher is not only to come up with interesting ideas and implement them in code, but to have the skill, persistence, and patience to get the idea working well. This requires developing a strong intuition about what might not be working, how to test that intuition, and what needs to be tweaked (e.g., hyperparameters or architectural changes). This is a very difficult skill to teach; it only really comes after years of building and playing with machine learning models yourself. We recognize that this deep intuition is developed over time, and we aim to support your growth in this area at Sakana.
7. Depth Over Breadth
We've observed too many candidates attempting numerous ideas or minor deviations from existing paradigms, instead of deeply exploring a single, novel concept. Even in problem sets where we explicitly encourage modifications, candidates often submit minor tweaks to existing work for questions specifically designed to test inventive ability and boundary-pushing. For instance, the "safe" approach involves testing different activation functions, RoPE positional embeddings, local attention, or MoE, resulting in a marginal improvement over the baseline. However, a single, slightly unconventional, yet well-motivated modification (even if it doesn't improve the baseline) is significantly better than implementing several standard changes. This allows for a much richer discussion during the interview.
Takeaway: You can get away with doing less in the Technical Problem Set if you go deeper into the parts you choose to explore.
Corollary: Don't try to add more shallow experiments to compensate for the lack of depth!
A Practical Checklist
One Week Before
- Re-read your submission as a skeptical reviewer would. There is often a long gap between finishing the submission and the interview, and some candidates forget what they tried. Not a good look.
- For each experiment, write down the hypothesis, method, result, and limitations
- Identify the weakest points. Prepare to discuss them honestly
Day Before
- Practice explaining each section in two minutes, then thirty seconds
- Review any ML fundamentals you might have glossed over, especially those related to what you did in your solution sets. For example, if you experimented with different optimizers, it would be helpful to understand the differences between SGD and Adam in detail (see the sketch after this list).
- Synthesize your research interests into a concrete statement
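If optimizers did feature in your solution, a quick way to re-ground the SGD-vs-Adam distinction is to write out the two update rules side by side. Below is a minimal NumPy sketch; the toy objective and hyperparameter values are illustrative defaults, not tied to any particular framework or problem set:

```python
import numpy as np

def sgd_step(p, grad, lr=0.1):
    # Plain SGD: a fixed-size step against the gradient,
    # identical for every parameter.
    return p - lr * grad

def adam_step(p, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    # Adam: exponential moving averages of the gradient (m) and its
    # square (v), bias-corrected, give each parameter its own
    # adaptive step size.
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad ** 2
    m_hat = m / (1 - b1 ** t)  # bias correction; t counts from 1
    v_hat = v / (1 - b2 ** t)
    return p - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Toy usage on f(p) = p^2, whose gradient is 2p.
p_sgd = np.array([1.0])
p_adam = np.array([1.0])
m = np.zeros(1)
v = np.zeros(1)
for t in range(1, 101):
    p_sgd = sgd_step(p_sgd, 2.0 * p_sgd)
    p_adam, m, v = adam_step(p_adam, 2.0 * p_adam, m, v, t)
print(p_sgd, p_adam)
```

Being able to reproduce and discuss these updates from first principles is exactly the kind of depth interviewers probe when your submission touches on optimizer choices.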
During The Interview
- If you don't understand a question, ask
- If you don't know an answer, say so, then reason through it
- Proactively mention limitations before being asked
- Show curiosity, not just competence
Final Thoughts
While we expect candidates to use AI assistance when preparing their applications, we encourage you to be upfront and explicit about when, how, and why you used AI, and to be able to articulate and distinguish your own contributions. Even if you lean heavily on AI, which is perfectly normal these days, you still need to read and understand everything the AI produces and be able to show that in the interview.
The technical assessment is a conversation starter. The real interview is about how you think.
Candidates who succeed share a common trait: they think like researchers, not implementers. They question assumptions, evaluate results critically, communicate precisely, and carry genuine curiosity about unsolved problems.