The computational capability of CPUs is rapidly advancing, with increasing core counts and growing computational resources per core. At the same time, the capacity of high-speed memory accessible to the CPU (both on-package and near-package) is growing exponentially over time. In this talk, we discuss how these trends strongly favor memory-intensive workloads with a high degree of parallelism, such as large-data scientific visualization. We describe a low-level, open-source software layer for high-performance visualization directly on CPUs (without the need for high-end graphics processors), and show how this library can be used in new and existing applications through a straightforward API. Using this library, it is feasible to render large-scale data with high image quality and performance comparable to (and in some cases greater than) that of top-of-the-line GPUs. We also briefly discuss the scalability, resource flexibility, and potential cost advantages of such an approach.