Cloud computing, as a disruptive technology, provides a dynamic, elastic, and promising computing environment for tackling the challenges of big data processing and analytics. Hadoop with MapReduce is a widely used open-source framework in cloud computing for storing and processing big data in a scalable fashion. Spark is a more recent parallel computing engine that works with Hadoop and exceeds MapReduce performance through its in-memory computing and high-level programming features. In this paper, we build a seismic data analytics cloud platform on top of Hadoop and Spark, and evaluate its productivity and performance with several basic but representative seismic data processing algorithms. We created a variety of seismic processing templates that simplify the programming effort of implementing scalable seismic data processing algorithms by hiding the complexity of parallelism. The cloud platform generates a complete Spark application from the user's program and configurations, and allocates resources to meet the program's requirements.
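To illustrate the template idea described above, a processing template can expose a single per-trace function to the user while the framework hides how traces are distributed. The sketch below is a minimal plain-Python stand-in, not the platform's actual API: the names `SeismicTemplate` and `apply_gain` are hypothetical, and the serial map stands in for what would be a Spark RDD transformation on the real platform.

```python
# Minimal sketch of a seismic processing "template": the user supplies
# only a per-trace function; the template hides how traces are iterated
# and processed (on the actual platform, via a Spark RDD map).
# All names here are illustrative, not the platform's real API.

class SeismicTemplate:
    def __init__(self, traces):
        # traces: list of lists of float samples (a stand-in for an RDD)
        self.traces = traces

    def map_traces(self, fn):
        # In Spark this would be sc.parallelize(traces).map(fn);
        # a serial list comprehension keeps the sketch self-contained.
        return SeismicTemplate([fn(t) for t in self.traces])

    def collect(self):
        return self.traces

def apply_gain(trace, gain=2.0):
    # Example per-trace algorithm: constant amplitude scaling.
    return [sample * gain for sample in trace]

traces = [[0.1, -0.2, 0.3], [0.5, 0.0, -0.5]]
result = SeismicTemplate(traces).map_traces(apply_gain).collect()
print(result)  # [[0.2, -0.4, 0.6], [1.0, 0.0, -1.0]]
```

The user writes only `apply_gain`; swapping in a different per-trace algorithm requires no changes to the distribution logic, which is the productivity benefit the templates aim for.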