Spring Kafka: the class is not in the trusted packages

In my Spring Boot / Kafka application, before a library update I used the class org.telegram.telegrambots.api.objects.Update to publish messages to a Kafka topic. After the update I use org.telegram.telegrambots.meta.api.objects.Update instead. As you can see, the two classes live in different packages.

After restarting the application, I ran into the following problem:

[org.springframework.kafka.KafkaListenerEndpointContainer#1-0-C-1] o.s.kafka.listener.LoggingErrorHandler : Error while processing: null

org.apache.kafka.common.errors.SerializationException: Error deserializing key/value for partition telegram.fenix.bot.update-0 at offset 4223. If needed, please seek past the record to continue consumption.
Caused by: java.lang.IllegalArgumentException: The class 'org.telegram.telegrambots.api.objects.Update' is not in the trusted packages: [java.util, java.lang, org.telegram.telegrambots.meta.api.objects]. If you believe this class is safe to deserialize, please provide its name. If the serialization is only done by a trusted source, you can also enable trust all (*).
	at org.springframework.kafka.support.converter.DefaultJackson2JavaTypeMapper.getClassIdType(DefaultJackson2JavaTypeMapper.java:139) ~[spring-kafka-2.1.8.RELEASE.jar!/:2.1.8.RELEASE]
	at org.springframework.kafka.support.converter.DefaultJackson2JavaTypeMapper.toJavaType(DefaultJackson2JavaTypeMapper.java:113) ~[spring-kafka-2.1.8.RELEASE.jar!/:2.1.8.RELEASE]
	at org.springframework.kafka.support.serializer.JsonDeserializer.deserialize(JsonDeserializer.java:221) ~[spring-kafka-2.1.8.RELEASE.jar!/:2.1.8.RELEASE]
	at org.apache.kafka.clients.consumer.internals.Fetcher.parseRecord(Fetcher.java:967) ~[kafka-clients-1.1.0.jar!/:na]
	at org.apache.kafka.clients.consumer.internals.Fetcher.access$3300(Fetcher.java:93) ~[kafka-clients-1.1.0.jar!/:na]
	at org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.fetchRecords(Fetcher.java:1144) ~[kafka-clients-1.1.0.jar!/:na]
	at org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.access$1400(Fetcher.java:993) ~[kafka-clients-1.1.0.jar!/:na]
	at org.apache.kafka.clients.consumer.internals.Fetcher.fetchRecords(Fetcher.java:527) ~[kafka-clients-1.1.0.jar!/:na]
	at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:488) ~[kafka-clients-1.1.0.jar!/:na]
	at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1155) ~[kafka-clients-1.1.0.jar!/:na]
	at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1115) ~[kafka-clients-1.1.0.jar!/:na]
	at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:699) ~[spring-kafka-2.1.8.RELEASE.jar!/:2.1.8.RELEASE]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_171]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_171]
	at java.lang.Thread.run(Thread.java:748) [na:1.8.0_171]

Here is my configuration:

@EnableAsync
@Configuration
public class ApplicationConfig {

    @Bean
    public StringJsonMessageConverter jsonConverter() {
        return new StringJsonMessageConverter();
    }
}

@Configuration
public class KafkaProducerConfig {

    @Value("${spring.kafka.bootstrap-servers}")
    private String bootstrapServers;

    @Bean
    public Map<String, Object> producerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, JsonSerializer.class);
        props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 15000000);
        return props;
    }

    @Bean
    public ProducerFactory<String, Update> updateProducerFactory() {
        return new DefaultKafkaProducerFactory<>(producerConfigs());
    }

    @Bean
    public KafkaTemplate<String, Update> updateKafkaTemplate() {
        return new KafkaTemplate<>(updateProducerFactory());
    }
}

@Configuration
public class KafkaConsumerConfig {

    @Value("${kafka.consumer.max.poll.interval.ms}")
    private String kafkaConsumerMaxPollIntervalMs;

    @Value("${kafka.consumer.max.poll.records}")
    private String kafkaConsumerMaxPollRecords;

    @Value("${kafka.topic.telegram.fenix.bot.update.consumer.concurrency}")
    private Integer updateConsumerConcurrency;

    @Bean
    public ConsumerFactory<String, String> consumerFactory(KafkaProperties kafkaProperties) {
        return new DefaultKafkaConsumerFactory<>(kafkaProperties.buildConsumerProperties(),
                new StringDeserializer(), new JsonDeserializer<>(String.class));
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(KafkaProperties kafkaProperties) {
        kafkaProperties.getProperties().put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, kafkaConsumerMaxPollIntervalMs);
        kafkaProperties.getProperties().put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, kafkaConsumerMaxPollRecords);
        ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.getContainerProperties().setAckMode(AckMode.MANUAL_IMMEDIATE);
        factory.setConsumerFactory(consumerFactory(kafkaProperties));
        return factory;
    }

    @Bean
    public ConsumerFactory<String, Update> updateConsumerFactory(KafkaProperties kafkaProperties) {
        return new DefaultKafkaConsumerFactory<>(kafkaProperties.buildConsumerProperties(),
                new StringDeserializer(), new JsonDeserializer<>(Update.class));
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, Update> updateKafkaListenerContainerFactory(KafkaProperties kafkaProperties) {
        kafkaProperties.getProperties().put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, kafkaConsumerMaxPollIntervalMs);
        kafkaProperties.getProperties().put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, kafkaConsumerMaxPollRecords);
        ConcurrentKafkaListenerContainerFactory<String, Update> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.getContainerProperties().setAckMode(AckMode.MANUAL_IMMEDIATE);
        factory.setConsumerFactory(updateConsumerFactory(kafkaProperties));
        factory.setConcurrency(updateConsumerConcurrency);
        return factory;
    }
}
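One way to deal with records that were written before the package move is to map the old class id, which is still stored in the record headers, onto the relocated class. This is a sketch only, assuming spring-kafka 2.1.x, where `JsonDeserializer#setTypeMapper` and `DefaultJackson2JavaTypeMapper#setIdClassMapping` are available; it replaces the `updateConsumerFactory` bean above:

```java
import java.util.Collections;

import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.boot.autoconfigure.kafka.KafkaProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.support.converter.DefaultJackson2JavaTypeMapper;
import org.springframework.kafka.support.serializer.JsonDeserializer;
import org.telegram.telegrambots.meta.api.objects.Update;

// inside KafkaConsumerConfig:

@Bean
public ConsumerFactory<String, Update> updateConsumerFactory(KafkaProperties kafkaProperties) {
    JsonDeserializer<Update> valueDeserializer = new JsonDeserializer<>(Update.class);
    DefaultJackson2JavaTypeMapper typeMapper = new DefaultJackson2JavaTypeMapper();
    // Pre-update records still carry the old class name in their type header;
    // mapping it onto the relocated class means the (now missing) old class
    // never has to be loaded, and the trusted-packages check is bypassed for it.
    typeMapper.setIdClassMapping(Collections.<String, Class<?>>singletonMap(
            "org.telegram.telegrambots.api.objects.Update", Update.class));
    // Records written after the update resolve through the trusted packages.
    typeMapper.addTrustedPackages("org.telegram.telegrambots.meta.api.objects");
    valueDeserializer.setTypeMapper(typeMapper);
    return new DefaultKafkaConsumerFactory<>(kafkaProperties.buildConsumerProperties(),
            new StringDeserializer(), valueDeserializer);
}
```

With this in place the existing `updateKafkaListenerContainerFactory` bean keeps working unchanged, since it only calls `updateConsumerFactory(kafkaProperties)`.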

application.properties

spring.kafka.bootstrap-servers=${kafka.host}:${kafka.port}
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.group-id=postfenix
spring.kafka.consumer.enable-auto-commit=false
spring.kafka.producer.value-serializer=org.springframework.kafka.support.serializer.JsonSerializer

How can I fix this and make Kafka deserialize the old messages into the new type?

And here is my listener:

@Component
public class UpdateConsumer {

    @KafkaListener(topics = "${kafka.topic.update}", containerFactory = "updateKafkaListenerContainerFactory")
    public void onUpdateReceived(ConsumerRecord<String, Update> consumerRecord, Acknowledgment ack) {
        // do some logic here
        ack.acknowledge();
    }
}

Answer:

See the documentation.

Starting with version 2.1, type information can be conveyed in record headers, allowing the handling of multiple types. In addition, the serializer/deserializer can be configured using Kafka properties:

JsonSerializer.ADD_TYPE_INFO_HEADERS (default true): set to false to disable this feature on the JsonSerializer (it sets the addTypeInfo property).
JsonDeserializer.KEY_DEFAULT_TYPE: fallback type for deserialization of keys if no header information is present.
JsonDeserializer.VALUE_DEFAULT_TYPE: fallback type for deserialization of values if no header information is present.
JsonDeserializer.TRUSTED_PACKAGES (default java.util, java.lang): comma-delimited list of package patterns allowed for deserialization; * means deserialize all.

By default, the serializer adds type information to the headers.

See also the Spring Boot documentation.

Likewise, you can disable the JsonSerializer's default behavior of sending type information in the headers:

spring.kafka.producer.value-serializer=org.springframework.kafka.support.serializer.JsonSerializer
spring.kafka.producer.properties.spring.json.add.type.headers=false
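If the producer no longer sends type headers, the consumer needs a fallback type to deserialize into. A sketch of the matching consumer-side properties, using the JsonDeserializer constants listed above (treat the exact spring.json.* keys as an assumption to verify against your spring-kafka version):

```properties
spring.kafka.consumer.value-deserializer=org.springframework.kafka.support.serializer.JsonDeserializer
# Fallback type when no type headers are present (JsonDeserializer.VALUE_DEFAULT_TYPE)
spring.kafka.consumer.properties.spring.json.value.default.type=org.telegram.telegrambots.meta.api.objects.Update
# Package patterns allowed for deserialization (JsonDeserializer.TRUSTED_PACKAGES)
spring.kafka.consumer.properties.spring.json.trusted.packages=org.telegram.telegrambots.meta.api.objects
```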

Alternatively, you can add type mappings to the inbound message converter to map the source type to the destination type.

That said: which version are you using?

Source: utcz.com/qa/425246.html