Working AI: At the Office with Research Engineer Will Wolf

 

How did you first get started in AI?

I’ve always loved mathematics. When I was a teenager, I was what I call a semi-professional internet poker player. This left three profound marks on my life: 1) The pursuit of trying to understand human behavior through numbers is indeed fascinating, 2) The internet is amazing; I can sit in my bedroom and become one of the best in the world at what I do with resources that are largely available for free, 3) I learned how I learn best. I found a mentor who helped me get up to speed in poker, so when I got started in AI, I hired a coach, a top-100 Kaggler, to work with me one-on-one. No one on the Kaggle forums really understood what I wanted or why it would be effective.

About two years after finishing school, I had lunch with a colleague at Google who encouraged me to learn to code. I spent the next three months at the kitchen table learning data science, i.e. R, Python, algorithms, machine learning, and refreshing on statistics. After a couple of stints as a backend dev and a data scientist, I took a year off, moved to Morocco, and self-studied machine learning for 50 hours a week. I came back to New York a year and a half ago and started at ASAPP.

 

What are you currently working on at ASAPP?

At ASAPP, we look for industries with massive economic opportunity, systemic inefficiency, and large amounts of data. Then we work to improve them with AI-native solutions. Our first product deals in enterprise interaction. As a research team, we work principally on problems in NLP.

I spent my first year at ASAPP on our machine learning engineering team because I wanted to mature significantly as an engineer, experience building the ML component of a product used by millions of customers, and learn to think in terms of not just code, but organizational process. Ultimately, my passion is mathematics, so a few months ago, I transitioned to our research team, working on largely pure-research projects with both internal and external academic advisors. In effect, I get paid to pursue expertise in NLP, then translate this work back into tools that improve our product!

 

Take us through your typical workday.

I don’t have an overtly rigid regimen, but I try to get a few hours of daily deep-work time where I can listen to music (like this) and think clearly about a problem.

As a researcher, I don’t have many meetings. When I do, I principally work with other research collaborators, machine learning engineers, and product managers. Without meetings, my day is a mix of reading papers and taking notes on these papers to present them to collaborators, coding up experiments, checking the results of experiments, and summarizing experiments.

I try to check in weekly with my principal investigator, a professor at Cornell. ASAPP has a fantastic set of academic advisors, some of whom are relatively active at the company, across MIT, Cornell, Harvard and NYU.

I try to maintain a balance of things that are working and things that aren’t. If a certain project is experiencing friction, be it data issues, tooling issues, or simply nothing appearing to work, I find it important to have another project to fall back on, where the sailing will be smoother. Another example is being deep in the weeds of some complicated implementation vs. reading a paper, which is typically an exercise in understanding higher-level ideas.

 

What tech stack do you use?
  • Atom: I like the selection of packages, and I’ve built an immensely useful one myself that I could hardly live without.
  • Jupyter Lab: I probably hang out in JL too much. When working with neural net frameworks like PyTorch, I find myself doing a lot of “check the size on that, compute the norm of that, inspect the gradient of that,” to which JL is quite amenable.
  • Python
  • AWS GPU instances: We have a fantastic set of internal tools for launching AWS instances. I work in Atom and then rsync up/down to the instance, again with internal tools. Basically, I’m just running aliased commands like: `launch`, `push`, `pull` (to launch instances, push local code onto a remote server, and pull down log and model files, respectively) and working out of Atom as I would locally, and everything works.
  • PyTorch: I build a lot of RNN-type models in PyTorch for classification, retrieval, generation.
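The `launch`/`push`/`pull` aliases above are internal ASAPP tools, but the pattern is easy to reproduce. A minimal sketch of what such wrappers might look like, assuming a plain rsync-over-SSH setup (the host alias, directory, and exclude patterns here are hypothetical, not ASAPP's actual tooling):

```shell
# Hypothetical stand-ins for the internal tools described above:
# thin shell functions that sync a local project with a GPU instance.
REMOTE_HOST="${REMOTE_HOST:-gpu-box}"      # assumed SSH alias for the instance
REMOTE_DIR="${REMOTE_DIR:-project}"        # assumed project directory on the remote

push() {
  # Push local code up to the instance, skipping version control
  # and heavyweight artifacts.
  rsync -avz --exclude='.git' --exclude='*.ckpt' ./ "$REMOTE_HOST:$REMOTE_DIR/"
}

pull() {
  # Pull down only log and model files, leaving everything else remote.
  rsync -avz --include='*/' --include='*.log' --include='*.ckpt' \
        --exclude='*' "$REMOTE_HOST:$REMOTE_DIR/" ./
}
```

With functions like these sourced in a shell profile, the edit-locally, run-remotely loop is just `push`, run the experiment over SSH, then `pull`.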
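As a concrete illustration of the RNN-for-classification pattern mentioned above, here is a minimal PyTorch sketch: embed a token sequence, run it through a GRU, and classify from the final hidden state. The architecture and hyperparameters are illustrative only, not a description of ASAPP's models:

```python
import torch
import torch.nn as nn

class RNNClassifier(nn.Module):
    """Minimal sequence classifier: embedding -> GRU -> linear head.
    All sizes here are arbitrary, illustrative choices."""
    def __init__(self, vocab_size, embed_dim, hidden_dim, n_classes):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, n_classes)

    def forward(self, tokens):                   # tokens: (batch, seq_len)
        embedded = self.embed(tokens)            # (batch, seq_len, embed_dim)
        _, last_hidden = self.rnn(embedded)      # (1, batch, hidden_dim)
        return self.out(last_hidden.squeeze(0))  # (batch, n_classes)

model = RNNClassifier(vocab_size=1000, embed_dim=32, hidden_dim=64, n_classes=5)
tokens = torch.randint(0, 1000, (8, 20))  # batch of 8 sequences, length 20
logits = model(tokens)                     # shape: (8, 5)
```

The same backbone swaps a linear head for a decoder (generation) or a similarity score (retrieval), which is why one RNN encoder serves all three tasks the list mentions.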

 

What did you do before working in AI? How does it factor into your work now?

I traveled a lot before I started working at ASAPP. In travel, I learned that life isn’t all that serious, and it’s important to enjoy the ride. For me, this means going deep into the field I love. In addition, traveling made it clear that AI is a thing that can, should, and will have an impact on the world’s problems, not just ours. Not just the US, not just Europe, but all corners of the world. It’s important to keep this expansive picture in mind when parsing both the good and bad potentials of artificial intelligence.

 

How do you keep learning?

For me, learning is about habit. If I’m not with friends, at work, taking a walk, seeing a movie, or eating a meal, I don’t know how to do much else besides open up a textbook and keep on learning. Furthermore, I’m deliberate about it: I try to be honest with myself about what I need to learn, the best way for me to learn it, and the constraints that are realistically involved. This is what led me to move to Morocco for a year and study. This, and likely only this, gave me the concentrated time to obtain the skills I needed to get my current job.

 

What AI trend are you most excited about?

It seems as if a “neural nets won’t save us on their own” sentiment is sweeping over AI practitioners. I fully identify with this. More and more I see the need to think harder about the structure of the problem and the generative process of the data we are trying to model. In effect, a model is a story for how some data are generated, or how a decision about the data is made. If the model doesn’t judiciously try to capture the minutiae of each process, it may be difficult to achieve a correct fit of the model to the data and to arrive at correct decisions about the data.

Though not a point of excitement, necessarily, generative models of audio and video will fundamentally change human politics! I imagine there will be some event to this effect in the next two years. It will be fascinating to watch.

Recent attention to bias and fairness in AI is wholly exciting as well; we all know these problems are important, and so are the consequences of getting them wrong.

 

Why did you choose to work in industry vs. academia?

I don’t have a PhD, so a career in academia has never been in my purview.

Working on ASAPP’s research team in industry, I have what I find to be the perfect balance: the freedom to learn, fantastic peers, not too many external pressures, the chance to see the impact of my work on our internal and external customers, all while living in beautiful NYC!

 

What advice do you have for people trying to break into AI?

Be the tornado. With the internet, arXiv, and MOOCs, there is simply nothing stopping you. If you want a career in AI, do it! Start tonight, and if you really do enjoy it, just keep going.

Go to the source: Favor textbooks and papers over blog posts. In other words, go to the place the technique in question was born. Anything else, and your learning is passing through additional layers of interpretation on the part of intermediaries, like the blog post author, who may be wrong!

Be social! The AI community is fantastic. Meet people! Chat on Twitter. Don’t do AI alone.

Will Wolf is a Research Engineer at ASAPP. You can find him on Twitter, LinkedIn, and his personal website.

 

Do you know someone who’s hard at work in AI? Nominate your friend, coworker, or idol by sending us a note at hello@deeplearning.ai!
