Recordings
I have given dozens of talks about new papers over the years, but only a couple of more general recorded ones. Here they are:
Machine Learning in Recommender Systems (2018, EN)
Joint talk given with Pavel Kordík and Ivan Povalyev about uses of ML in recommender systems.
- 2018 was not that far from the Inception moment in 2013 and most models were still pure classical ML
- My section is about how to use Deep Learning (yes DL was the hype :D) in Recommenders and starts here: https://youtu.be/_YR3Osnl_Dc?t=1620
So if you want, feel free to take a jump into the past and see:
- my work on online optimization of hyperparameters using an Evolutionary Strategy with surrogate modeling by Gaussian Mixtures
- starts at https://youtu.be/_YR3Osnl_Dc?t=2905
- you can read more in my thesis: Optimization of Recommender Systems
- Feed-forward Nets
- RNNs - GRU, LSTMs
- combining embeddings in Sparse Denoising Auto-Encoders
- t-SNE projections of Inception v3 embeddings to blow the audience's mind :D
- RL examples like Bandits (HybridLinUCB)
- Offline evaluation with fancy techniques like Doubly Robust Off-Policy value estimation (minimal sketch below)
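If you have not run into Doubly Robust Off-Policy estimation before, here is a minimal numpy sketch of the idea (the function and variable names are mine, not from the talk): it combines a reward-model estimate (direct method) with importance-weighted residuals, so it stays accurate if either the reward model or the logged propensities are good.

```python
# Minimal sketch of a Doubly Robust off-policy value estimate for logged
# bandit feedback. Variable names are illustrative, not from the talk.
import numpy as np

def doubly_robust_value(rewards, logged_probs, target_probs,
                        predicted_rewards_logged, predicted_value_target):
    # rewards:                  observed rewards r_i under the logging policy
    # logged_probs:             pi_b(a_i | x_i), propensity of the logged action
    # target_probs:             pi_e(a_i | x_i), probability the evaluated policy picks that action
    # predicted_rewards_logged: r_hat(x_i, a_i), reward model at the logged action
    # predicted_value_target:   E_{a ~ pi_e}[r_hat(x_i, a)], reward model averaged over the target policy
    importance_weights = target_probs / logged_probs
    # direct-method estimate corrected by importance-weighted residuals
    return np.mean(predicted_value_target +
                   importance_weights * (rewards - predicted_rewards_logged))
```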
When I am watching this in 2025, a large part of it is actually still state of the art :D
- the online optimization algo is still running in production
- SASRec is just GRU4Rec with a transformer block (see the sketch after this list)
- offline evals are still hard to use unless you have part of your traffic running on random sampling
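To illustrate the SASRec vs GRU4Rec point above, here is a toy PyTorch sketch (my own simplification, ignoring the positional embeddings, causal attention mask, layer stacking and dropout that the real models use). It shows that the two differ mainly in the sequence encoder used for next-item scoring.

```python
# Toy sketch: GRU4Rec-style vs SASRec-style next-item recommenders differ
# mainly in the sequence encoder. Hyperparameters and names are illustrative.
import torch
import torch.nn as nn

class NextItemRecommender(nn.Module):
    def __init__(self, num_items, dim=64, encoder="gru"):
        super().__init__()
        self.item_emb = nn.Embedding(num_items, dim)
        self.encoder_type = encoder
        if encoder == "gru":   # GRU4Rec-style recurrent encoder
            self.encoder = nn.GRU(dim, dim, batch_first=True)
        else:                  # SASRec-style self-attention (transformer) block
            self.encoder = nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)

    def forward(self, item_ids):                 # item_ids: (batch, seq_len)
        x = self.item_emb(item_ids)              # (batch, seq_len, dim)
        if self.encoder_type == "gru":
            h, _ = self.encoder(x)
        else:
            h = self.encoder(x)
        # score all items for every position by dot product with the item embeddings
        return h @ self.item_emb.weight.T        # (batch, seq_len, num_items)
```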
Building a Production Recommender at Scale (2022, EN)
General overview of how to build a recommender.
Does not go deep into any part but covers:
- basic architecture
- data
- brief overview of basic and more complex models
- evaluation
- testing
- deployment
- monitoring
O budoucnosti programování (On the Future of Programming) (2025, podcast in Czech)
I talk about my views on the “Future of Programming” in light of recent AI advances:
- why programmers should strive to understand business more:
- speeding up companies still follows Amdahl’s law: you do not get massive gains unless you remove the bottlenecks (see the back-of-the-envelope numbers after this list)
- the bottlenecks are typically meetings that delay the execution loop
- => the way to get speed-ups from better AI for coding is to empower programmers to do more iterations by themselves
- => programmers have to understand the business goals of the project to be able to iterate without getting sign-offs from management
- in other words, it does not matter whether you implement the feature in 5 minutes or 5 hours if you still have to wait till the next day to get the next steps signed off by your product manager
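To make the Amdahl’s law point concrete, a back-of-the-envelope calculation with made-up numbers (the 30% coding share and the 10x speedup are purely illustrative):

```python
# Amdahl's law: overall speedup when only the "coding" fraction of the
# delivery loop gets faster. Numbers below are made up for illustration.
def overall_speedup(coding_fraction, coding_speedup):
    return 1 / ((1 - coding_fraction) + coding_fraction / coding_speedup)

# If coding is 30% of the loop and the rest is waiting for meetings and
# sign-offs, even a 10x faster coder speeds up the whole loop only ~1.37x:
print(overall_speedup(0.3, 10))   # ~1.37
```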
- why it still makes sense to hire Juniors even if we have Cursor:
- because they have AGI + are capable of continuous learning unlike LLMs
- and more, like how we use LibreChat, n8n, MCP servers, etc.
[Outdated] Future of AI regulation in the EU (Artificial Intelligence Act) (2022, EN)
- a high-level overview of the AI Act draft
- the actual AI Act now in effect has some differences, and it’s not clear (in 2025) exactly how the implementation will look, so I would recommend newer sources instead if you’re interested