I have a very large Avro schema with this record as one of its types:
{
  "namespace": "com.pingcap.simple.avro",
  "name": "DataType",
  "type": "record",
  "docs": "each column's mysql type information",
  "fields": [
    {
      "name": "mysqlType",
      "type": "string"
    },
    {
      "name": "charset",
      "type": "string"
    },
    {
      "name": "collate",
      "type": "string"
    },
    {
      "name": "length",
      "type": "int"
    },
    {
      "name": "decimal",
      "type": ["null", "int"],
      "default": null
    },
    {
      "name": "elements",
      "type": [
        "null",
        {
          "type": "array",
          "items": "string"
        }
      ],
      "default": null
    },
    {
      "name": "unsigned",
      "type": ["null", "boolean"],
      "default": null
    },
    {
      "name": "zerofill",
      "type": ["null", "boolean"],
      "default": null
    }
  ]
}
What I'm seeing when I try to deserialize this using the following code:
DatumReader<DDL> reader = new SpecificDatumReader<>(DDL.class);
Decoder decoder = DecoderFactory.get().jsonDecoder(DDL.getClassSchema(), new String(data));
return reader.read(null, decoder);
is that the fields declared with a default MUST still be present in the data, otherwise I get an exception. For instance:

org.apache.avro.AvroTypeException: Expected field name not found: decimal
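For context, that call lives in a small helper along these lines (simplified; DDL is the class avro-tools generates from the top-level record of the schema, and the class/method names here are just illustrative):

import java.io.IOException;

import org.apache.avro.io.DatumReader;
import org.apache.avro.io.Decoder;
import org.apache.avro.io.DecoderFactory;
import org.apache.avro.specific.SpecificDatumReader;

import com.pingcap.simple.avro.DDL;

public class DdlDecoder {

    // Deserialize a JSON-encoded message into the generated DDL class.
    public DDL decode(byte[] data) throws IOException {
        DatumReader<DDL> reader = new SpecificDatumReader<>(DDL.class);
        Decoder decoder = DecoderFactory.get()
                .jsonDecoder(DDL.getClassSchema(), new String(data));
        return reader.read(null, decoder);
    }
}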
Here is that specific part of the data:
"dataType": {
"mysqlType": "int",
"charset": "binary",
"collate": "binary",
"length": 11,
"decimal": null,
"elements": null,
"unsigned": null,
"zerofill": null
}
If I remove any of those nullable fields, I get the exception.
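For example, the same payload with decimal omitted (which I expected the null default to cover) fails with the AvroTypeException above:

"dataType": {
  "mysqlType": "int",
  "charset": "binary",
  "collate": "binary",
  "length": 11,
  "elements": null,
  "unsigned": null,
  "zerofill": null
}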
I am using avro-tools to generate the classes from the schema:
java -jar ./avro-tools-1.11.1.jar compile schema ./schema.json src/main/java/
Does anyone know why the defaults are not being picked up, and why these fields need to be explicitly present in the encoded data?