Breaking it into offline vs. online environments, and candidate retrieval vs. ranking steps.
Pointers for thinking through your methodology and implementation, and navigating the review process.
Three documents I write (one-pager, design doc, after-action review) and how I structure them.
Access, serving, integrity, convenience, autopilot; use what you need.
What the top teams did to win the 36-hour data hackathon. No, not machine learning.
What questions do they answer? How do they compare? What open-source solutions are available?
Checking for correct implementation, expected learned behaviour, and satisfactory performance.
Updating our FastAPI app to let users select options and download results.
I couldn't find any guides on serving HTML with FastAPI, so I wrote one to fill that gap.
I wanted to add my recent writing to my GitHub Profile README but was too lazy to do manual updates.
By the end of this article, we'll have a workflow of tests and checks that run automatically on each git push.
A curious discussion made me realize my expert blind spot. And no, Airflow is not late.
Can maintaining machine learning in production be easier? I walk through some practical tips.
I thought deploying machine learning was hard. Then I had to maintain multiple systems in prod.
OMSCS CS6200 (Introduction to Operating Systems) - Moving data between processes, with multiple threads.
OMSCS CS6300 (Software Development Process) - Writing Java and collaboratively developing an Android app.