The paper is structured around three guiding questions: First, what (if anything) is Big Data? Second, how has 'Big Data' changed what data is and how data functions within models? Third, how has 'Big Data' affected conceptions of 'complex phenomena' and of prediction? Around these three questions, the paper develops a narrative of how Big Data became an approach to what we today call 'Artificial Intelligence,' first within research on language processing and later in image recognition. The presentation focuses on the industrial contexts in which 'Big Data' could emerge and recovers what actors saw as 'economic imperatives.' In doing so, I aim to situate 'Big Data' within the political economy of the technology industry. I analyse the intellectual debates around 'probabilistic,' 'rational,' 'empirical,' and 'statistical' approaches in order to map the different conceptions of 'empiricism' and 'rationalism' that functioned as epistemological premises within AI and ML research. The paper closes with a few considerations of the technology industry today and of how the industry has reconfigured imaginaries of the future.