Dataset<Row> dataset = new DatasetDDBReader().read(new FilmDeserializer(), ddbconf, session);
How to Write a Dataset to DynamoDB
Extend DDBSerializer and implement the field list (fieldMap), indicating each column name and its specific type.
Create the DDBJobConf indicating the DynamoDB connection details (see the sketch after the usage code below).
Read the input Dataset and pass it to the DatasetDDBWriter write function.
Film Dynamo Serializer
import java.util.Arrays;
import java.util.List;

// Spark SQL type classes; assumed imports -- adjust if the project ships its own type markers.
import org.apache.spark.sql.types.DecimalType;
import org.apache.spark.sql.types.IntegerType;
import org.apache.spark.sql.types.StringType;

public class FilmSerializer extends DDBSerializer {

    // Maps each DynamoDB attribute name to the type class of the corresponding Dataset column.
    @Override
    protected List<DDBField> fieldMap() {
        return Arrays.asList(
                new DDBField("Film", StringType.class),
                new DDBField("Genre", StringType.class),
                new DDBField("Lead Studio", StringType.class),
                new DDBField("Audience score %", IntegerType.class),
                new DDBField("Profitability", DecimalType.class),
                new DDBField("Rotten Tomatoes %", StringType.class),
                new DDBField("Worldwide Gross", StringType.class),
                new DDBField("Year", IntegerType.class),
                new DDBField("Actors", StringType.class)
        );
    }
}
// Build a local SparkSession and load the CSV that will be written to DynamoDB.
SparkSession session = SparkSession.builder().master("local").getOrCreate();
Dataset<Row> inputDataset = session.read().option("header", "true").csv("./film.csv");
new DatasetDDBWriter().write(new FilmSerializer(), ddbconf, session);
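The ddbconf used above is the DDBJobConf from the second step; its construction is not shown in this README. The snippet below is only a minimal sketch: the setter names, table name, region, and endpoint are hypothetical and should be checked against the actual DDBJobConf class.

// Hypothetical DDBJobConf setup -- the real setters may differ from these assumed names.
DDBJobConf ddbconf = new DDBJobConf();
ddbconf.setTableName("Film");                 // target DynamoDB table (assumed setter)
ddbconf.setRegion("us-east-1");               // AWS region (assumed setter)
ddbconf.setEndpoint("http://localhost:8000"); // optional local endpoint (assumed setter)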