High-precision simulations based on first principles are a cornerstone of the LHC physics programme. As we approach the high-luminosity phase of the LHC, however, the demand for both accuracy and speed is pushing traditional simulation pipelines to their limits. This motivates a broader shift towards modern computing paradigms: machine learning for more efficient numerical evaluations, and hardware-aware implementations for scalable deployment. After introducing the basic structure of the Monte Carlo simulation chain and the relevant machine-learning concepts, I will present recent progress along three complementary directions: neural importance sampling as implemented in the MadNIS framework; machine-learned surrogate models for expensive amplitude calculations; and GPU-based implementations designed for large-scale event generation. Taken together, these developments pave the way towards a new generation of LHC simulation tools that are faster, smarter, and cooler.
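To make the first of these directions concrete, the estimator underlying neural importance sampling can be sketched as follows (the notation here is illustrative and not taken from the abstract itself): a phase-space integral over an integrand f, such as a differential cross section, is rewritten as an expectation under a learned sampling density g_\theta, typically the tractable density of a normalizing flow,

\[
I \;=\; \int f(x)\,\mathrm{d}x \;=\; \mathbb{E}_{x \sim g_\theta}\!\left[\frac{f(x)}{g_\theta(x)}\right] \;\approx\; \frac{1}{N} \sum_{i=1}^{N} \frac{f(x_i)}{g_\theta(x_i)}, \qquad x_i \sim g_\theta .
\]

Training tunes \theta so that g_\theta follows the shape of |f|, reducing the variance of the estimate; in the ideal limit g_\theta \propto |f|, the event weights f(x_i)/g_\theta(x_i) become constant.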