Here is one way to solve this.
import org.apache.spark.sql.Row
import org.apache.spark.sql.functions._
import scala.collection.mutable.WrappedArray
val data = Seq((Seq(1,2,3),Seq(4,5,6),Seq(7,8,9)))
val df = sqlContext.createDataFrame(data)
val first = df.first
// use getAs with an explicit type to read the first column as a WrappedArray
val mapped = first.getAs[WrappedArray[Int]](0)
// now we can use it like a normal collection
mapped.mkString("\n")
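As an aside, WrappedArray is just a Seq wrapper around the underlying Array, so the whole Scala collections API is available on it. A small plain-Scala sketch (Scala 2.12 era, where Spark returns WrappedArray; no Spark required):

```scala
import scala.collection.mutable.WrappedArray

// WrappedArray.make wraps an existing Array without copying (Scala 2.12 API).
val wa: WrappedArray[Int] = WrappedArray.make(Array(1, 2, 3))
println(wa.map(_ * 2).mkString(","))  // prints "2,4,6"
```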
// extract the array columns from every row with a pattern match
val rows = df.collect.map {
  case Row(a: Seq[Any], b: Seq[Any], c: Seq[Any]) =>
    (a, b, c)
}
rows.mkString("\n")
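Note that matching on Seq[Any] is unchecked because of type erasure. If you'd rather avoid the pattern match, Row.getSeq reads each array column directly; a minimal sketch against the df built above (Spark Row API, not run here):

```scala
// Alternative: read each array column with getSeq instead of pattern matching.
// Assumes the same df defined above; getSeq is part of the Spark Row API.
val rows2 = df.collect.map { row =>
  (row.getSeq[Int](0), row.getSeq[Int](1), row.getSeq[Int](2))
}
rows2.mkString("\n")
```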
Answer from Rockie Yang on Stack Overflow: "How to iterate scala wrappedArray? (Spark)"
From "How to convert wrappedArray column in spark dataset to java array?" on Stack Overflow:
If you have a WrappedArray, there are several ways to convert it to a Java collection.
You can leverage scala.collection.JavaConverters helper methods:
WrappedArray<Long> wrappedArray = row.getAs("ids");
List<Long> list = scala.collection.JavaConverters.seqAsJavaList(wrappedArray.seq());
Alternatively, you can use the Row.getList method to obtain a Java List directly:
List<Long> list = row.getList(row.fieldIndex("ids"));
To convert a Scala WrappedArray column to a Java List in Java Spark (here line is of type Row):
List<Long> list = scala.collection.JavaConverters.seqAsJavaList(line.getAs("fieldName"));