Are there any known libraries/approaches for converting ORC files to Parquet files? Otherwise I am thinking of using Spark to read an ORC file into a DataFrame and then write it out as a Parquet file.
You mentioned using Spark to read ORC files into DataFrames and then store those DataFrames as Parquet files. This is a perfectly valid and quite efficient approach!
Depending on your preference and your use case, you could also use Hive or Pig [maybe throw in Tez for better performance here], Java MapReduce, or even NiFi/StreamSets [depending on your distribution]. This is a very straightforward conversion, so go with whatever suits you best [or whatever you are most comfortable with :)]