Spring Boot Actuator
Editor's Note
While developing a back-end application with demanding performance requirements, I found that it ran into problems once deployed to Aliyun, even though it ran without any issues on an ordinary laptop. This called for monitoring it over a period of time to pinpoint function execution times, analyze what was going on, and add a visual health-check view.
Actuator
Reference: Spring Boot Actuator: Production-ready features (Part V of the Spring Boot reference documentation).
Spring Boot includes Spring Boot Actuator. This section answers some questions that come up frequently.
Changing the HTTP port or address of the Actuator endpoints
In a standalone application, the Actuator HTTP port defaults to the same port as the application's HTTP port. To make the Actuator use a different port, set the external property management.server.port. To listen on a completely different network address (for example, when you have an internal network for management and an external network for the application), you can also set management.server.address to a valid address the server can bind to.
For more detail, see the source code of ManagementServerProperties and Section 54.2, "Customizing the Management Server Port", in the Production-ready features part of the reference documentation.
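For example, an application.properties along the following lines moves the Actuator endpoints onto their own port and interface (illustrative values, not taken from the original project):
# Actuator on its own port, bound to the loopback interface only
management.server.port=8081
management.server.address=127.0.0.1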
Customizing the 'whitelabel' error page
Spring Boot installs a 'whitelabel' error page that you will see in the browser when a server error occurs. (todo)
Data cleansing
todo
Metrics
Micrometer
Spring Boot Actuator provides dependency management and auto-configuration for Micrometer, an application metrics facade that supports many monitoring systems:
Elastic
Elasticsearch
Install
# Download and install the public signing key
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
# Installing from the APT repository
sudo apt-get install apt-transport-https
# Save the repository definition to /etc/apt/sources.list.d/elastic-7.x.list
echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
# install
sudo apt-get update && sudo apt-get install elasticsearch
Run as a service
sudo systemctl daemon-reload
sudo systemctl enable elasticsearch.service
sudo systemctl start elasticsearch.service
sudo systemctl status elasticsearch.service
Check
curl http://localhost:9200
Note: if you run into the following message, wait a while before retrying:
kibana server is not ready yet
Checking the state of the Spring Boot application
https://cloud.ibm.com/docs/java?topic=java-spring-metrics
Implementation notes
Unexpected response body
Problem description
2019-06-27 19:14:51.534 WARN 49769 --- [trics-publisher] i.m.c.instrument.push.PushMeterRegistry : Unexpected exception thrown while publishing metrics for ElasticMeterRegistry
java.lang.RuntimeException: java.lang.IllegalArgumentException: Unexpected response body: <no response body>
at io.micrometer.elastic.ElasticMeterRegistry.determineMajorVersionIfNeeded(ElasticMeterRegistry.java:245) ~[micrometer-registry-elastic-1.1.5.jar:1.1.5]
at io.micrometer.elastic.ElasticMeterRegistry.publish(ElasticMeterRegistry.java:187) ~[micrometer-registry-elastic-1.1.5.jar:1.1.5]
at io.micrometer.core.instrument.push.PushMeterRegistry.publishSafely(PushMeterRegistry.java:48) ~[micrometer-core-1.1.5.jar:1.1.5]
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) ~[na:1.8.0_202]
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308) ~[na:1.8.0_202]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180) ~[na:1.8.0_202]
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294) ~[na:1.8.0_202]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149) ~[na:1.8.0_202]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624) ~[na:1.8.0_202]
at java.lang.Thread.run(Thread.java:748) ~[na:1.8.0_202]
Caused by: java.lang.IllegalArgumentException: Unexpected response body: <no response body>
at io.micrometer.elastic.ElasticMeterRegistry.getMajorVersion(ElasticMeterRegistry.java:253) ~[micrometer-registry-elastic-1.1.5.jar:1.1.5]
at io.micrometer.elastic.ElasticMeterRegistry.determineMajorVersionIfNeeded(ElasticMeterRegistry.java:243) ~[micrometer-registry-elastic-1.1.5.jar:1.1.5]
... 9 common frames omitted
Analysis
Stepping through with breakpoints, I suspected that the request got no response because the username and password were missing.
The Security Settings section of the Kibana home page states that an administrative user is required.
Reference: Kibana user manual (Chinese edition)
Conclusion: after further checking, it turned out that the metrics have to be sent to Elasticsearch, not to Kibana.
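As an aside: had authentication actually been the cause, the Micrometer Elastic registry can be given credentials through the corresponding Spring Boot properties (the values below are placeholders, not from the original project):
management.metrics.export.elastic.user-name=elastic
management.metrics.export.elastic.password=your-password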
Unable to connect to Elasticsearch
By default Elasticsearch runs in development mode and only accepts local connections. To accept connections from outside, the configuration has to be changed (on Ubuntu the file is /etc/elasticsearch/elasticsearch.yml). Both of the lines below need to be modified; if you only change network.host, the service fails to start, and the logs under /var/log/elasticsearch state that discovery.seed_hosts must be set as well.
network.host: 0.0.0.0
discovery.seed_hosts: ["127.0.0.1", "[::1]"]
Test
curl http://ip:9200
Success.
Index not found
{ "error": {
"root_cause": [
{
"type": "index_not_found_exception",
"reason": "no such index [39.104.74.230:9200]",
"resource.type": "index_or_alias",
"resource.id": "39.104.74.230:9200",
"index_uuid": "_na_",
"index": "39.104.74.230:9200"
}
],
"type": "index_not_found_exception",
"reason": "no such index [39.104.74.230:9200]",
"resource.type": "index_or_alias",
"resource.id": "39.104.74.230:9200",
"index_uuid": "_na_",
"index": "39.104.74.230:9200"
},
"status": 404
}
Check
Visiting http://localhost:9200/_cat/indices/ returns the following:
green open .kibana_task_manager dRpmIGChQUOBrQ1GBXL16Q 1 0 2 0 12.8kb 12.8kb
green open .monitoring-kibana-7-2019.06.28 8gVdZOkTSie7TylGLjAhUg 1 0 474 0 283.1kb 283.1kb
green open .kibana_1 CCXT1xH8Se6PG4aF4z_LJg 1 0 7 2 59.3kb 59.3kb
green open .monitoring-es-7-2019.06.28 uJyGhlnjTHWHdt73_wxnuA 1 0 3790 540 3.4mb 3.4mb
Trying to create the index
I tried to create the index from the Kibana Console, but it reported an error. The index I tried to create:
PUT 39.104.74.230:9200
The error:
{ "error": {
"root_cause": [
{
"type": "invalid_index_name_exception",
"reason": "Invalid index name [39.104.74.230:9200], must not contain ':'",
"index_uuid": "_na_",
"index": "39.104.74.230:9200"
}
],
"type": "invalid_index_name_exception",
"reason": "Invalid index name [39.104.74.230:9200], must not contain ':'",
"index_uuid": "_na_",
"index": "39.104.74.230:9200"
},
"status": 400
}
Note: the index had already been configured on the Spring Boot side, but the setting did not take effect:
# Monitoring configuration
management.metrics.export.elastic.enabled=true
management.metrics.export.elastic.step=1s
management.metrics.export.elastic.index=springboot-newindex
management.metrics.export.elastic.auto-create-index=true
management.metrics.export.elastic.host=http://xx.xx.xx.xx:9200
Fixing the host URL
I suspected the problem was caused by the URL given for the host; the official Spring documentation (Spring Boot reference, 57. Metrics, Elastic section) does not append a trailing /:
management.metrics.export.elastic.host=http://xx.xx.xx.xx:9200/
failed to send metrics to elastic
{ "error": {
"root_cause": [
{
"type": "string_index_out_of_bounds_exception",
"reason": "String index out of range: 0"
}
],
"type": "string_index_out_of_bounds_exception",
"reason": "String index out of range: 0"
},
"status": 500
}
Stepping through with breakpoints showed that, although auto-create-index was enabled on the client, the creation request that the client sent to the server came back as a failure.
String uri = config.host() + ES_METRICS_TEMPLATE;
if (httpClient.head(uri)
.withBasicAuthentication(config.userName(), config.password())
.send()
.onError(response -> {
if (response.code() != 404) {
logger.error("could not create index in elastic (HTTP {}): {}", response.code(), response.body());
}
})
// here get error
.isSuccessful()) {
checkedForIndexTemplate = true;
logger.debug("metrics template already exists");
return;
}
The exception suggests the index string must not be empty, yet an index had been set in the configuration.
Conclusion
After more investigation it turned out, surprisingly, to be caused by the Astrill VPN being enabled. Turning the VPN off brought everything back to normal.
Spring Boot configuration
Exposing metrics
management.endpoints.web.exposure.include=metrics
The metrics can then be viewed at http://localhost:8000/actuator/metrics:
{ "names": [
"jvm.memory.max",
"jvm.threads.states",
"process.files.max",
"jvm.gc.memory.promoted",
"system.load.average.1m",
"jvm.memory.used",
"jvm.gc.max.data.size",
"jvm.gc.pause",
"jvm.memory.committed",
"system.cpu.count",
"logback.events",
"tomcat.global.sent",
"rece.counter", // here
"jvm.buffer.memory.used",
"tomcat.sessions.created",
"jvm.threads.daemon",
"system.cpu.usage",
"jvm.gc.memory.allocated",
"tomcat.global.request.max",
"tomcat.global.request",
"tomcat.sessions.expired",
"jvm.threads.live",
"jvm.threads.peak",
"tomcat.global.received",
"process.uptime",
"tomcat.sessions.rejected",
"process.cpu.usage",
"tomcat.threads.config.max",
"jvm.classes.loaded",
"send.counter", // here
"jvm.classes.unloaded",
"tomcat.global.error",
"tomcat.sessions.active.current",
"tomcat.sessions.alive.max",
"jvm.gc.live.data.size",
"tomcat.threads.current",
"process.files.open",
"jvm.buffer.count",
"jvm.buffer.total.capacity",
"tomcat.sessions.active.max",
"tomcat.threads.busy",
"process.start.time"
]
}
This JSON confirms that the metrics I added have been registered.
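The rece.counter and send.counter entries marked above are the custom metrics. A minimal sketch of how counters like these could be registered (assumed for illustration; the original registration code is not shown in these notes):
import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.stereotype.Component;

@Component
public class TrafficMetrics {

    private final Counter receiveCounter;
    private final Counter sendCounter;

    public TrafficMetrics(MeterRegistry registry) {
        // Names match the "rece.counter" / "send.counter" entries in the JSON above.
        this.receiveCounter = registry.counter("rece.counter");
        this.sendCounter = registry.counter("send.counter");
    }

    public void onMessageReceived() {
        receiveCounter.increment();
    }

    public void onMessageSent() {
        sendCounter.increment();
    }
}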
Verifying the data in Kibana
Management -> Index Patterns -> delete apm-* -> create index pattern -> monitoring-xxx-*
After this, the uploaded monitoring data shows up correctly. I am not yet very familiar with Kibana, though, so I can only get a small amount of information out of it:
{ "_index": "monitoring-sync-1-2019-07",
"_type": "_doc",
"_id": "pI0urGsBEOofhudoZCtY",
"_version": 1,
"_score": null,
"_source": {
"@timestamp": "2019-07-01T06:17:29.932Z",
"name": "jvm_threads_states",
"type": "gauge",
"state": "runnable",
"value": 13
},
"fields": {
"@timestamp": [
"2019-07-01T06:17:29.932Z"
]
},
"sort": [
1561961849932
]
}
Using @Timed in Spring Boot
To use @Timed in a Spring Boot application, the following configuration is required:
import io.micrometer.core.aop.TimedAspect;
import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.EnableAspectJAutoProxy;
// https://micrometer.io/docs/concepts
// This configuration is required if you want to use the @Timed annotation
@Configuration
@EnableAspectJAutoProxy
public class MicrometerConfiguration {
@Bean
public TimedAspect timedAspect(MeterRegistry meterRegistry) {
return new TimedAspect(meterRegistry);
}
}
Then annotate the target method:
@Timed(value = "A_B_C")
@Override
public void method() {
...
}
Confirm that http://localhost:8080/actuator/metrics is available among the Spring endpoints; to expose it, management.endpoints.web.exposure.include=metrics must be added to the configuration file.
Note: the annotated method must actually be invoked at least once, otherwise the metric is treated as empty and left out of the JSON:
{"names": [
"jvm.memory.max",
"get_attrs", // in here
"jvm.threads.states",
"process.files.max",
"jvm.gc.memory.promoted",
"system.load.average.1m",
"jvm.memory.used",
"jvm.gc.max.data.size",
"jvm.gc.pause",
"jvm.memory.committed",
"system.cpu.count",
"logback.events",
"tomcat.global.sent",
"jvm.buffer.memory.used",
"tomcat.sessions.created",
"jvm.threads.daemon",
"system.cpu.usage",
"jvm.gc.memory.allocated",
"tomcat.global.request.max",
"tomcat.global.request",
"tomcat.sessions.expired",
"jvm.threads.live",
"jvm.threads.peak",
"tomcat.global.received",
"process.uptime",
"tomcat.sessions.rejected",
"process.cpu.usage",
"tomcat.threads.config.max",
"jvm.classes.loaded",
"jvm.classes.unloaded",
"tomcat.global.error",
"tomcat.sessions.active.current",
"tomcat.sessions.alive.max",
"jvm.gc.live.data.size",
"tomcat.threads.current",
"process.files.open",
"jvm.buffer.count",
"jvm.buffer.total.capacity",
"tomcat.sessions.active.max",
"tomcat.threads.busy",
"process.start.time"
]
}
If publishing to Elastic works correctly, the submitted JSON can be seen in Kibana:
{ "_index": "monitoring-app-1-2019-07",
"_type": "_doc",
"_id": "ze8Vt2sBEOofhudoP7Ue",
"_version": 1,
"_score": null,
"_source": {
"@timestamp": "2019-07-03T09:05:51.376Z",
"name": "xxx_xxx_xxx_xxx",
"type": "timer",
"class": "xxxxxxxxx",
"exception": "none",
"method": "messageReceived",
"count": 0,
"sum": 0,
"mean": 0,
"max": 0
},
"fields": {
"@timestamp": [
"2019-07-03T09:05:51.376Z"
]
},
"highlight": {
"name": [
"@kibana-highlighted-field@sync_reciver_format_toObject@/kibana-highlighted-field@"
]
},
"sort": [
1562144751376
]
}
Two questions from actual use
First: what time unit do the @Timed values actually use?
Not being familiar with Micrometer, I could not tell what unit the data submitted to Elastic was in. This is what I was looking at:
{ "_index": "monitoring-sync-1-2019-07",
"_type": "_doc",
"_id": "PXgCu2sBr9quksIjyNJr",
"_version": 1,
"_score": null,
"_source": {
"@timestamp": "2019-07-04T03:24:10.213Z",
"name": "attribute_get",
"type": "timer",
"class": "org.aicfve.sync.delay.service.AttributeDelayQueueService",
"exception": "none",
"method": "getAttributes",
"count": 24,
"sum": 0.41679,
"mean": 0.01736625,
"max": 0.038021
},
"fields": {
"@timestamp": [
"2019-07-04T03:24:10.213Z"
]
},
"sort": [
1562210650213
]
}
The unit of sum, mean and max can only be determined from the source code.
The JSON is produced in ElasticMeterRegistry, in the following method:
// VisibleForTesting
Optional<String> writeTimer(Timer timer) {
return Optional.of(writeDocument(timer, builder -> {
builder.append(",\"count\":").append(timer.count());
builder.append(",\"sum\":").append(timer.totalTime(getBaseTimeUnit()));
builder.append(",\"mean\":").append(timer.mean(getBaseTimeUnit()));
builder.append(",\"max\":").append(timer.max(getBaseTimeUnit()));
}));
}
Stepping into getBaseTimeUnit shows the following:
@Override
@NonNull
protected TimeUnit getBaseTimeUnit() {
return TimeUnit.MILLISECONDS;
}
So the published values are all in milliseconds; the timing itself is measured in nanoseconds (System.nanoTime), as can be seen elsewhere in the code.
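As a quick sanity check, the same values can also be read programmatically with an explicit TimeUnit. This is a minimal sketch, not code from the original project; the meter name "attribute_get" is taken from the Elastic document above:
import java.util.concurrent.TimeUnit;

import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.Timer;

public class TimerUnitCheck {

    // Call from any bean that has the MeterRegistry injected.
    public static void logTimerInMillis(MeterRegistry registry) {
        // Look up the timer by the name used in @Timed / in the Elastic document.
        Timer timer = registry.get("attribute_get").timer();
        // Passing the unit explicitly removes any doubt about what the numbers mean.
        System.out.printf("count=%d sum=%.3f ms mean=%.3f ms max=%.3f ms%n",
                timer.count(),
                timer.totalTime(TimeUnit.MILLISECONDS),
                timer.mean(TimeUnit.MILLISECONDS),
                timer.max(TimeUnit.MILLISECONDS));
    }
}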
Kibana
Kibana is Elastic's data visualization component; for security, consult the official website. Note that from version 7.x onward the UI language can be switched to Chinese directly in the configuration file, which makes it much friendlier for beginners.
Important concepts in Kibana
- Kibana does not store or receive the data itself
- Kibana only visualizes the data
- Start by defining an index pattern; data from several devices can be pulled into one search context
- Discover runs queries against the current index pattern; several searches can be created and saved
- Visualizations display the results of a Discover search; several can be created and saved
- Several visualizations can be combined on one dashboard
Viewing the trend curves of several monitored methods together in Kibana
This comes down to how Kibana is used. I did not merge them into a single visualization; instead I created one visualization per method and displayed them together on a single dashboard.
Naming metrics in the Spring Boot annotations
It is advisable to write all annotation metric names with underscores.
Micrometer rewrites a name like A.B to A_B when publishing, so the name written in the code differs from the data Kibana operates on, which makes it awkward to build charts.
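A small illustration of why (a hedged sketch with assumed names, not code from the original project): a meter registered with a dotted name in code shows up under a different name in the Elastic documents, while an underscore name stays the same everywhere.
import io.micrometer.core.instrument.MeterRegistry;
import io.micrometer.core.instrument.simple.SimpleMeterRegistry;

public class NamingExample {

    public static void main(String[] args) {
        MeterRegistry registry = new SimpleMeterRegistry();

        // Registered in code with a dotted name...
        registry.counter("sync.receiver.count").increment();
        // ...but, as noted above, the dots are rewritten when publishing to Elastic,
        // so the "name" field seen in Kibana reads "sync_receiver_count".

        // A name written with underscores from the start stays identical in the
        // code, in /actuator/metrics and in Kibana.
        registry.counter("sync_receiver_count_direct").increment();
    }
}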