A writes:
Recently I've been trying to build a semantic search system with my own data, and I came across your blog post. I found quite a few papers using "recall@k" as an evaluation metric (e.g., Semantic Product Search by Amazon, Embedding-based Retrieval in Facebook Search by Facebook, Embedding-based Product Retrieval in Taobao Search), but it is unclear how they obtain the total number of relevant documents (or items) for their query-document pairs.
While it is certainly possible to hire a lot of annotators to figure out which documents are relevant to a search query, I don't think that is economically feasible. Do you have any idea how engineers in industry figure out the total number of relevant documents (or items) for their query-document pairs? Many thanks!
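For context: recall@k for a query is the share of all that query's relevant documents that appear in the top-k retrieved results, so the denominator is exactly the count A is asking about. Here's a minimal sketch, with hypothetical query and document IDs:

```python
from typing import Dict, List, Set

def recall_at_k(retrieved: List[str], relevant: Set[str], k: int) -> float:
    """Fraction of all relevant docs that appear in the top-k retrieved docs."""
    if not relevant:  # no judged relevant docs for this query
        return 0.0
    hits = sum(1 for doc in retrieved[:k] if doc in relevant)
    return hits / len(relevant)  # denominator = total relevant docs for the query

# Hypothetical judged data: query -> relevant doc ids, and a ranked list per query
relevant_by_query: Dict[str, Set[str]] = {
    "red running shoes": {"d1", "d4", "d7"},
    "usb-c charger": {"d2", "d9"},
}
retrieved_by_query: Dict[str, List[str]] = {
    "red running shoes": ["d1", "d3", "d7", "d5", "d8"],
    "usb-c charger": ["d9", "d6", "d2", "d3", "d4"],
}

k = 3
per_query = [
    recall_at_k(retrieved_by_query[q], relevant_by_query[q], k)
    for q in relevant_by_query
]
print(f"mean recall@{k}: {sum(per_query) / len(per_query):.2f}")
# query 1: 2 of 3 relevant docs in the top 3; query 2: 2 of 2 -> mean 0.83
```

The hard part, as A notes, isn't the arithmetic but filling in `relevant_by_query`, i.e., knowing the full set of relevant documents per query.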
If I had to build a search engine from scratch, I would:
I think using human annotators can work, but given how costly it is, probably only for auditing defects or edge cases.