On July 15 in Berlin, we got together for the second edition of the AI Plumbers Conference, an open-source meetup for low-level AI builders to dive deep into the plumbing of modern AI, from cutting-edge data infrastructure to AI accelerators. Take a look at how it went!
Ever wondered how much companies use their own tools? This is an opportunity to look behind the curtain and see the AI use cases that Hugging Face has implemented internally. Kicking off our event in Berlin, Vaibhav Srivastav (VB) of Hugging Face gave the most demo-intense talk of the day, explaining how those use cases were implemented, which models were used (quite a variety, actually), which pipelines power them, and offering some advice on how to start implementing your own use cases. Go try the examples from the talk on HF and get inspired!
Key moments from the talk:
0:29 - Introduction of VB and an overview of how generative AI and ML are used in the company's products
2:25 - The 8 useful AI use cases at Hugging Face that we are going to deep-dive into
4:24 - Demo of the Translation use case
6:20 - Summarization of research papers indexed on HF
7:50 - Emoji generation - an SDXL LoRA used on the Hub
8:45 - Demo of Semantic Search on Spaces with natural language
11:18 - Demo of Semantic Search on Daily Papers
13:20 - Structured Generation & Parsing - summarizing relevant papers from arXiv for researchers (the highest level of automation and one of the coolest use cases from VB's perspective)
16:45 - Demo of SQL Generation
20:05 - Presentation of the entire HF Hub acting as an MCP (Model Context Protocol) server
22:36 - Recommendations for using generative AI in your own projects
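The semantic-search demos in the talk (on Spaces and on Daily Papers) come down to one core idea: embed the query and the documents as vectors, then rank documents by cosine similarity. A minimal, self-contained sketch of that idea, using mock 3-dimensional vectors in place of real sentence embeddings and hypothetical Space names (the actual demos use learned embeddings from a sentence-embedding model):

```python
# Sketch of semantic search by cosine similarity.
# The vectors and Space names below are made-up stand-ins;
# a real system would embed text with a sentence-embedding model.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Mock embeddings for three hypothetical Spaces.
docs = {
    "image-upscaler": [0.9, 0.1, 0.0],
    "text-to-speech": [0.1, 0.9, 0.1],
    "sql-assistant":  [0.0, 0.2, 0.9],
}

# Mock embedding of a natural-language query like "read text aloud".
query = [0.05, 0.95, 0.05]

# Rank Spaces by similarity to the query; the best match comes first.
ranked = sorted(docs, key=lambda name: cosine(query, docs[name]), reverse=True)
print(ranked[0])  # → text-to-speech
```

The same ranking loop scales up in the real demos by swapping the mock vectors for model-generated embeddings and using an approximate-nearest-neighbor index instead of a full sort.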
The presentation slides are available here: