Ahui's Blog

Notes on systems, networking, clusters, databases, distributed systems, and cloud computing

Handling the ActiveMQ startup failure "Failed to start Apache ActiveMQ ([localhost, null], java.io.IOException: Detected missing/corrupt journal files referenced by:[0:ExceptionDLQ.ActivityResultPostProcess] 10 messages affected.)"

An ActiveMQ broker that had been running for quite a while exited immediately when restarted after a stop. The log showed the following errors:

2018-06-11 17:54:21,483 | WARN  | Some journal files are missing: [17496, 17495, 17494, 11811, 11807, 11793] | org.apache.activemq.store.kahadb.MessageDatabase | main
2018-06-11 17:54:21,704 | ERROR | [0:ExceptionDLQ.ActivityResultPostProcess] references corrupt locations. 10 messages affected. | org.apache.activemq.store.kahadb.MessageDatabase | main
2018-06-11 17:54:21,706 | ERROR | Failed to start Apache ActiveMQ ([localhost, null], java.io.IOException: Detected missing/corrupt journal files referenced by:[0:ExceptionDLQ.ActivityResultPostProcess] 10 messages affected.) | org.apache.activemq.broker.BrokerService | main
2018-06-11 17:54:21,711 | INFO  | Apache ActiveMQ 5.13.3 (localhost, null) is shutting down | org.apache.activemq.broker.BrokerService | main
2018-06-11 17:54:21,715 | INFO  | Connector openwire stopped | org.apache.activemq.broker.TransportConnector | main
2018-06-11 17:54:21,718 | INFO  | Connector amqp stopped | org.apache.activemq.broker.TransportConnector | main
2018-06-11 17:54:21,721 | INFO  | Connector stomp stopped | org.apache.activemq.broker.TransportConnector | main
2018-06-11 17:54:21,724 | INFO  | Connector mqtt stopped | org.apache.activemq.broker.TransportConnector | main
2018-06-11 17:54:21,727 | INFO  | Connector ws stopped | org.apache.activemq.broker.TransportConnector | main

Solution:

In conf/activemq.xml, change the following configuration from:

        <persistenceAdapter>
            <kahaDB directory="${activemq.data}/kahadb"/>
        </persistenceAdapter>

to:

        <persistenceAdapter>
            <kahaDB directory="${activemq.data}/kahadb"
                    ignoreMissingJournalfiles="true"
                    checkForCorruptJournalFiles="true"
                    checksumJournalFiles="true"/>
        </persistenceAdapter>

Save and exit.
What these options mean:

ignoreMissingJournalfiles (default: false) — ignore missing journal files; when false, startup fails if any journal file is missing
checkForCorruptJournalFiles (default: false) — check journal files for corruption on startup; when true, detected corruption triggers a repair attempt
checksumJournalFiles (default: false) — write a checksum for journal data so that corrupt journal files can be detected
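
After saving, restart the broker and watch the log. A minimal sketch, assuming a default install layout under /opt/activemq (adjust the paths to your environment):

cd /opt/activemq
bin/activemq stop
bin/activemq start
# confirm the corrupt entries were dropped and the broker came up
grep -E "Dropped .* messages from the index|is starting" data/activemq.log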

With that change ActiveMQ starts normally, and the log shows:

2018-06-11 18:03:46,224 | INFO  | KahaDB is version 6 | org.apache.activemq.store.kahadb.MessageDatabase | main
2018-06-11 18:03:46,367 | INFO  | Recovering from the journal @19584:19615405 | org.apache.activemq.store.kahadb.MessageDatabase | main
2018-06-11 18:03:46,369 | INFO  | Recovery replayed 1 operations from the journal in 0.092 seconds. | org.apache.activemq.store.kahadb.MessageDatabase | main
2018-06-11 18:03:46,425 | WARN  | Some journal files are missing: [17496, 17495, 17494, 11811, 11807, 11793] | org.apache.activemq.store.kahadb.MessageDatabase | main
2018-06-11 18:03:46,646 | INFO  | [0:ExceptionDLQ.ActivityResultPostProcess] dropped: ID:sh-saas-o2o-o2oweb-online-14-16305-1524666866071-1:1:40:1:1 at corrupt location: 11793:26365268 | org.apache.activemq.store.kahadb.MessageDatabase | main
2018-06-11 18:03:46,648 | INFO  | [0:ExceptionDLQ.ActivityResultPostProcess] dropped: ID:sh-saas-o2o-o2oweb-online-13-32764-1524672923835-1:1:170:1:1 at corrupt location: 11807:27766123 | org.apache.activemq.store.kahadb.MessageDatabase | main
2018-06-11 18:03:46,650 | INFO  | [0:ExceptionDLQ.ActivityResultPostProcess] dropped: ID:sh-saas-o2o-o2oweb-online-14-25005-1524672922167-1:1:164:1:2 at corrupt location: 11807:28547936 | org.apache.activemq.store.kahadb.MessageDatabase | main
2018-06-11 18:03:46,653 | INFO  | [0:ExceptionDLQ.ActivityResultPostProcess] dropped: ID:sh-saas-o2o-o2oweb-online-13-32764-1524672923835-1:1:98:1:1 at corrupt location: 11807:29093398 | org.apache.activemq.store.kahadb.MessageDatabase | main
2018-06-11 18:03:46,655 | INFO  | [0:ExceptionDLQ.ActivityResultPostProcess] dropped: ID:sh-saas-o2o-o2oweb-online-14-25005-1524672922167-1:1:163:1:1 at corrupt location: 11807:30160163 | org.apache.activemq.store.kahadb.MessageDatabase | main
2018-06-11 18:03:46,657 | INFO  | [0:ExceptionDLQ.ActivityResultPostProcess] dropped: ID:sh-saas-o2o-o2oweb-online-13-32764-1524672923835-1:1:98:1:3 at corrupt location: 11807:31199074 | org.apache.activemq.store.kahadb.MessageDatabase | main
2018-06-11 18:03:46,659 | INFO  | [0:ExceptionDLQ.ActivityResultPostProcess] dropped: ID:sh-saas-o2o-o2oweb-online-12-8588-1524672398221-1:1:162:1:1 at corrupt location: 11811:25770150 | org.apache.activemq.store.kahadb.MessageDatabase | main
2018-06-11 18:03:46,661 | INFO  | [0:ExceptionDLQ.ActivityResultPostProcess] dropped: ID:sh-saas-o2o-o2oweb-online-14-1716-1527773539665-1:1:88:1:1 at corrupt location: 17494:17098861 | org.apache.activemq.store.kahadb.MessageDatabase | main
2018-06-11 18:03:46,663 | INFO  | [0:ExceptionDLQ.ActivityResultPostProcess] dropped: ID:sh-saas-o2o-o2oweb-online-12-1316-1527773206992-1:1:28:1:1 at corrupt location: 17494:18217707 | org.apache.activemq.store.kahadb.MessageDatabase | main
2018-06-11 18:03:46,664 | INFO  | [0:ExceptionDLQ.ActivityResultPostProcess] dropped: ID:sh-saas-o2o-o2oweb-online-11-21373-1527773207600-1:1:61:1:3 at corrupt location: 17494:20873455 | org.apache.activemq.store.kahadb.MessageDatabase | main
2018-06-11 18:03:46,666 | INFO  | Detected missing/corrupt journal files.  Dropped 10 messages from the index in 0.289 seconds. | org.apache.activemq.store.kahadb.MessageDatabase | main
2018-06-11 18:03:46,675 | INFO  | PListStore:[/data/log/activemq/localhost/tmp_storage] started | org.apache.activemq.store.kahadb.plist.PListStoreImpl | main
2018-06-11 18:03:46,683 | INFO  | JobSchedulerStore: /data/log/activemq/localhost/scheduler started. | org.apache.activemq.store.kahadb.scheduler.JobSchedulerStoreImpl | main
2018-06-11 18:03:48,193 | INFO  | Apache ActiveMQ 5.13.3 (localhost, ID:sh-o2o-pressure-java-online-01-15400-1528711426716-0:1) is starting | org.apache.activemq.broker.BrokerService | main

Day-to-day Kubernetes usage and testing


With Kubernetes 1.10 installed as described in the previous post, let's run through some day-to-day operations.

1. Creating a deployment and service

Create a YAML file:

vim nginx.yaml 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-router
  namespace: test
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx-router
    spec:
      containers:
      - name: nginx-router
        image: 172.21.248.242/base/nginx
        ports:
        - containerPort: 80

---
kind: Service
apiVersion: v1
metadata:
  name: nginx-router
  namespace: test
spec:
  selector:
    app: nginx-router
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80

---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-router-ingress
  namespace: test
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: "nginx.k8s.dev.huilog.com"
    http:
      paths:
      - backend:
          serviceName: nginx-router
          servicePort: 80

# Create the deployment and services:
[root@bs-ops-test-docker-dev-01 dev]# kubectl create -f nginx.yaml 
deployment.extensions "nginx-router" created
service "nginx-router" created
ingress.extensions "nginx-router-ingress" created
[root@bs-ops-test-docker-dev-01 dev]#
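
To confirm everything came up, list the resources in the test namespace and hit the ingress host. A sketch (the curl step assumes nginx.k8s.dev.huilog.com resolves to one of the traefik ingress nodes):

kubectl get deploy,svc,ing -n test
kubectl get pods -n test -l app=nginx-router -o wide
curl -I http://nginx.k8s.dev.huilog.com/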

Kubernetes Traefik ingress installation and configuration

Traefik is an open-source reverse proxy and load balancer. Its biggest strength is that it integrates directly with common microservice systems and configures itself automatically and dynamically. It currently supports backends including Docker, Swarm, Mesos/Marathon, Kubernetes, Consul, Etcd, Zookeeper, BoltDB, a REST API, and more.
Here is the architecture diagram:

Note that ingress controllers are effectively part of Kubernetes: an Ingress is the entry point through which traffic from outside the cluster reaches it, forwarding users' URL requests to different services. An Ingress plays the role of a reverse-proxy load balancer such as nginx or Apache, and it also carries the rule definitions, i.e. the URL routing information; keeping that routing information fresh is the job of the Ingress controller.

An Ingress Controller is essentially a watcher. It talks to the Kubernetes API continuously, sensing changes to backend services and pods in real time: pods added or removed, services created or deleted, and so on. When it picks up such a change, it combines it with the Ingress rules to generate a configuration, then updates the reverse-proxy load balancer and reloads its configuration, which is how service discovery is achieved.
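
Once the controller is deployed, a quick way to see this watching in action is to tail its logs while creating an Ingress. A sketch, assuming traefik was deployed into kube-system with the k8s-app=traefik-ingress-lb label used by the upstream example manifests:

kubectl -n kube-system get pods -l k8s-app=traefik-ingress-lb
kubectl -n kube-system logs -l k8s-app=traefik-ingress-lb --tail=20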

Kubernetes v1.10 cluster installation and configuration

1. Planning the k8s cluster

1.1. Dependencies of Kubernetes 1.10

Kubernetes v1.10 supports and tests only certain versions of related components such as etcd and Docker. The recommended versions are:
– docker: 1.11.2 to 1.13.1 and 17.03.x
– etcd: 3.1.12
– the full list is as follows:

Reference: External Dependencies
* The supported etcd server version is 3.1.12, as compared to 3.0.17 in v1.9 (#60988)
* The validated docker versions are the same as for v1.9: 1.11.2 to 1.13.1 and 17.03.x (ref)
* The Go version is go1.9.3, as compared to go1.9.2 in v1.9. (#59012)
* The minimum supported go is the same as for v1.9: go1.9.1. (#55301)
* CNI is the same as v1.9: v0.6.0 (#51250)
* CSI is updated to 0.2.0 as compared to 0.1.0 in v1.9. (#60736)
* The dashboard add-on has been updated to v1.8.3, as compared to 1.8.0 in v1.9. (#57326)
* Heapster is the same as v1.9: v1.5.0. It will be upgraded in v1.11. (ref)
* Cluster Autoscaler has been updated to v1.2.0. (#60842, @mwielgus)
* Updates kube-dns to v1.14.8 (#57918, @rramkumar1)
* Influxdb is unchanged from v1.9: v1.3.3 (#53319)
* Grafana is unchanged from v1.9: v4.4.3 (#53319)
* CAdvisor is v0.29.1 (#60867)
* fluentd-gcp-scaler is v0.3.0 (#61269)
* Updated fluentd in fluentd-es-image to fluentd v1.1.0 (#58525, @monotek)
* fluentd-elasticsearch is v2.0.4 (#58525)
* Updated fluentd-gcp to v3.0.0. (#60722)
* Ingress glbc is v1.0.0 (#61302)
* OIDC authentication is coreos/go-oidc v2 (#58544)
* Updated fluentd-gcp updated to v2.0.11. (#56927, @x13n)
* Calico has been updated to v2.6.7 (#59130, @caseydavenport)
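
To compare an installed node against this list, a quick sketch (etcdctl behavior differs between the v2 and v3 APIs, and availability of each command depends on how the components were installed):

docker version --format '{{.Server.Version}}'
ETCDCTL_API=3 etcdctl version
kubectl version --short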

How to configure HTTP basic authentication on a K8S ingress

Here is how to configure HTTP basic authentication on an ingress in K8S:

1. Use htpasswd to create an auth file:

htpasswd -c ./auth myusername
cat auth
myusername:$apr1$78Jyn/1K$ERHKVRPPlzAX8eBtLuvRZ0

2. Create a K8S secret from it:

kubectl create secret generic mysecret --from-file auth --namespace=monitoring 
kubectl --namespace=monitoring get secret mysecret 
NAME      TYPE    DATA    AGE 
mysecret Opaque   1      106d
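
To double-check what was stored (the key inside the secret must be named auth, which it inherits from the file name used above):

kubectl --namespace=monitoring get secret mysecret -o jsonpath='{.data.auth}' | base64 -d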

3. Tie the secret to the ingress with the following annotations:

  • ingress.kubernetes.io/auth-type: "basic"
  • ingress.kubernetes.io/auth-secret: "mysecret"
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: prometheus-dashboard
  namespace: monitoring
  annotations:
    kubernetes.io/ingress.class: traefik
    ingress.kubernetes.io/auth-type: "basic"
    ingress.kubernetes.io/auth-secret: "mysecret"
spec:
  rules:
  - host: dashboard.prometheus.example.com
    http:
      paths:
      - backend:
          serviceName: prometheus
          servicePort: 9090

Finally, create it:

kubectl create -f prometheus-ingress.yaml -n monitoring
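
Then verify the protection from any client. A sketch (substitute the password you entered when running htpasswd):

# without credentials: expect HTTP 401
curl -I http://dashboard.prometheus.example.com/
# with credentials: expect HTTP 200
curl -I -u myusername:mypassword http://dashboard.prometheus.example.com/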

References:

https://docs.traefik.io/user-guide/kubernetes/

https://docs.traefik.io/configuration/backends/kubernetes/

Using an Apple wireless keyboard on Windows 10

I have an Apple wireless keyboard at home and brought it to the office, where my machine is a Dell laptop running Windows 10. Via Settings -> Devices -> Bluetooth, "Add Bluetooth or other device", I found the Apple keyboard, typed the pairing code on the keyboard, pressed Enter, and it connected.

In use, though, there was a problem: some special keys did not work. In particular the Fn key did nothing, which hurts productivity badly, because the Apple keyboard has no Home or End keys; you need Fn plus the arrow keys to get Home and End.

I guessed the cause was a missing driver, and Apple does not publish Windows drivers separately, which makes this tricky. Then it occurred to me that Apple's Boot Camp might help: Boot Camp lets users install Windows 10, so it must ship drivers. On Apple's site, however, only Boot Camp 5.1 is available for download; 6.0 and later, the versions that actually support Windows 10, have no web download link. So I found an iMac Boot Camp 6.1.6237 package online and tried it, and it worked. The steps are as follows:

Setting up an A/B testing environment with nginx map

With nginx's map directive you can route requests to different backends based on various values (URL parameters, POST values, cookies, HTTP headers, and so on), which gives you a simple A/B testing environment.

Here is an example configuration:

# underscores_in_headers on;
# When using more than one map, the following two values must be increased, otherwise nginx fails with an error
map_hash_max_size 262144;
map_hash_bucket_size 262144;

# Define three upstreams for the A/B test, each serving different content; three matching virtual hosts on these ports are defined further down
upstream def {
        server 127.0.0.1:8080;
}

upstream a {
        server 127.0.0.1:8081;
}

upstream b {
        server 127.0.0.1:8082;
}

# Define a map: $cookie_aid is the value of the cookie named aid. When aid is "12345", $upserver becomes "a" (the map values here are the upstream names); the other entries work the same way
map $cookie_aid $upserver {
        default                          "def";
        "12345"                          "a";
        "67890"                          "b";
}
# $http_pid takes the value of the pid HTTP header; when pid is "12345", $headerpid becomes "a", and so on
map $http_pid $headerpid {
        default                          "def";
        "12345"                          "a";
        "67890"                          "b";
}
# $host is the domain name the user requested; when it is 12345.abtest-b.xxxxx.com, $bserver becomes "a", and so on
map $host $bserver {
		12345.abtest-b.xxxxx.com       "a";
		67890.abtest-b.xxxxx.com       "b";
		.abtest-b.xxxxx.com            "def";
		default                        "def";
}
# $arg_userid takes the value of the userid parameter in the request URL; when userid is 12345, $userid becomes "a", and so on
map $arg_userid $userid {
        default                          "def";
        "12345"                          "a";
        "67890"                          "b";
}

server {
        listen 80;
        server_name abtest.xxxxx.com;
        charset utf-8;
        root /data/www/a/;
        access_log /data/log/nginx/abtest.internal.weimobdev.com_access.log main;
        error_log /data/log/nginx/abtest.internal.weimobdev.com_error.log info;

        location / {
# The if below chains checks across variables: after the aid cookie map has been evaluated, fall through to the pid HTTP header map
                if ( $upserver = "def" ) {
                         set $upserver  $headerpid;
                }
# Because $upserver holds an upstream name, we can proxy to it directly
                proxy_pass http://$upserver;   
        }
}

server {
        listen 80;
        server_name abtest-b.xxxxx.com *.abtest-b.xxxxx.com;
        charset utf-8;
        root /data/www/a/;
        access_log /data/log/nginx/abtest-b.xxxxx.com_access.log main;
        error_log /data/log/nginx/abtest-b.xxxxx.com_error.log info;

        location / {
# Chain the checks again: after the $host map has been evaluated, fall through to the userid URL parameter map
                if ( $bserver = "def" ) {
                         set $bserver $userid;
                }
# $bserver likewise holds an upstream name, so proxy directly
                proxy_pass http://$bserver;
        }
}

server {
        listen 8080;
        charset utf-8;
        root /data/www/default/;
        access_log /data/log/nginx/default.log;
}

server {
        listen 8081;
        charset utf-8;
        root /data/www/a/;
        access_log /data/log/nginx/a.log;
}

server {
        listen 8082;
        charset utf-8;
        root /data/www/b/;
        access_log /data/log/nginx/b.log;
}
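
A quick way to exercise the maps from the command line; a sketch, assuming the hostnames resolve to this nginx (otherwise point curl at the server's IP and set the Host header as in the last example):

# no cookie: falls through to the default backend on 8080
curl -s http://abtest.xxxxx.com/
# cookie aid=12345: routed to upstream "a"
curl -s -b "aid=12345" http://abtest.xxxxx.com/
# no cookie, header pid=67890: routed to upstream "b"
curl -s -H "pid: 67890" http://abtest.xxxxx.com/
# host-based map: 12345.abtest-b.xxxxx.com goes to upstream "a"
curl -s -H "Host: 12345.abtest-b.xxxxx.com" http://abtest-b.xxxxx.com/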

Accessing a URL with cookies using curl on Linux

1. Overview

There are several ways to access a URL from the command line on Linux; the main ones are listed below.

1.1. elinks

elinks – a lynx-like alternative character-mode WWW browser.

For example: elinks --dump http://www.baidu.com

1.2. wget

This downloads the page to a local file:

[root@el5-mq2 ~]# wget http://www.baidu.com

1.3. curl

curl prints the page source:

curl http://www.baidu.com/index.html

1.4. lynx

lynx http://www.baidu.com

2. curl in practice

Here is the requirement: the server collects visit data by reading values from cookies, so when simulating a URL visit we need to send cookie parameters along, and curl does exactly this.
First check the help:
curl -h
 -b/--cookie <name=string/file> Cookie string or file to read cookies from (H)
 -c/--cookie-jar <file> Write cookies to this file after operation (H)
    --create-dirs   Create necessary local directory hierarchy
    --crlf          Convert LF to CRLF in upload
    --crlfile <file> Get a CRL list in PEM format from the given file
The -b option does this:
  curl -b "key1=val1;key2=val2;"
or it can read cookies straight from a file:
curl -b ./cookie.txt
A test example:
curl -b "user_trace_token=20150518150621-02994ed9a0fb42d1906a56258e072fc4;LGUID=20150515135257-a33a769c-fac6-11e4-91ce-5254005c3644" http://10.10.26.164:1235/click?v=1&logtype=deliver&position=home_hot-0&orderid=10197777&userid=1942556&positionid=148&url=http%3a%2f%2fwww.lagou.com%2fjobs%2f317000.html%3fsource%3dhome_hot%26i%3dhome_hot-5&fromsite=http%3a%2f%2fwww.lagou.com%2fzhaopin%2fAndroid%3flabelWords%3dlabel%26utm_source%3dAD__baidu_pinzhuan%26utm_medium%3dsem%26utm_campaign%3dSEM&optime=2015-06-15_20:00:00
This still did not work: the parameters in the URL were not received (the unquoted & characters are interpreted by the shell, so the URL is cut off at the first &). Passing the parameters with -d and forcing the request method back to GET with -G fixes it:
curl -b "user_trace_token=20150518150621-02994ed9a0fb42d1906a56258e072fc4;LGUID=20150515135257-a33a769c-fac6-11e4-91ce-5254005c3644;LGSID=20150518150621-02994ed9a0fb42d1906a56258e072fc4;LGRID=20150617230732-4ea87972-1580-11e5-9a88-000c29653e90;" -d "v=1&logtype=deliver&position=i_home-1&orderid=10197777&userid=1942556&positionid=148&url=http%3a%2f%2fwww.lagou.com%2fjobs%2f317000.html%3fsource%3dhome_hot%26i%3dhome_hot-5&fromsite=http%3a%2f%2fwww.lagou.com%2fzhaopin%2fAndroid%3flabelWords%3dlabel%26utm_source%3dAD__baidu_pinzhuan%26utm_medium%3dsem%26utm_campaign%3dSEM&optime=2015-06-15_20:00:00" -G http://10.10.26.164:1235/click
To capture the cookies the response sets, use the -c option; combining it with -b filename, which reads cookies from a file, is especially convenient.
You can first run with -c to generate a cookie file as a template, then edit that file and pass its name to -b.
Usage:
curl -b c1.txt -c c2.txt -d "v=1&_v=j31&a=406405635&t=pageview&_s=1&dr=http%3a%2f%2fwww.sogou.com%2ftuguang&dl=http%3A%2F%2Fwww.lagou.com%2F%3futm_source%3dad_sougou_pingzhuan&ul=zh-cn&de=UTF-8&dt=%E6%8B%89%E5%8B%BE%E7%BD%91-%E6%9C%80%E4%B8%93%E4%B8%9A%E7%9A%84%E4%BA%92%E8%81%94%E7%BD%91%E6%8B%9B%E8%81%98%E5%B9%B3%E5%8F%B0&sd=24-bit&sr=1600x900&vp=1583x291&je=1&fl=18.0%20r0&_u=MACAAAQBK~&jid=&cid=1312768212.1431333683&tid=UA-41268416-1&z=1204746223" -G http://192.168.52.130:1234/collect
The generated c2.txt looks like this:
# Netscape HTTP Cookie File
# http://curl.haxx.se/docs/http-cookies.html
# This file was generated by libcurl! Edit at your own risk.
192.168.52.130 FALSE / FALSE 1757574737 user_trace_token 20150914151217-eedd019e-5aaf-11e5-8a69-000c29653e90
192.168.52.130 FALSE / FALSE 1442217595 LGSID 20150914152955-652a13c5-5ab2-11e5-846d-000c29653e90
192.168.52.130 FALSE / FALSE 1442217595 PRE_UTM
192.168.52.130 FALSE / FALSE 1442217595 PRE_HOST www.huxiu.com
192.168.52.130 FALSE / FALSE 1442217595 PRE_SITE http%3A%2F%2Fwww.huxiu.com%2Ftuguang
192.168.52.130 FALSE / FALSE 1442217595 PRE_LAND http%3A%2F%2Fwww.lagou.com%2F%3F
192.168.52.130 FALSE / FALSE 0 LGRID 20150914152955-652a1630-5ab2-11e5-846d-000c29653e90
192.168.52.130 FALSE / FALSE 1757574737 LGUID 20150914151217-eedd0624-5aaf-11e5-8a69-000c29653e90
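
The saved file can then be fed straight back into later requests, and updated in place by passing it to both options; a sketch reusing the host from the example above:

curl -b c2.txt -c c2.txt -G -d "v=1" http://192.168.52.130:1234/collect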

Reposted from: AI 数据 » linux 使用curl命令访问url并模拟cookie

Monitoring MSTP leased lines with IP SLA + SNMP

An MSTP leased line is fiber converted to an RJ45 copper port by a media converter before it reaches the router. This creates a problem: when the fiber is cut somewhere along the path, the copper port on the router stays UP and the router notices nothing. That complicates monitoring, since leased lines are usually monitored by watching interface status.

From the material I could find, there are several ways to monitor this:

  1. Replace the media converters with link-loss-forwarding converters: when the optical side goes down, the electrical side drops too, and so does the router's copper port. These reportedly exist, but I have not used one.
  2. Cisco's SNMP support includes an OID that can trigger remote pings, but the whole procedure is cumbersome, and monitoring many lines this way requires writing code. If you are interested, see: http://www.cnblogs.com/cunshen/articles/163987.html and http://www.cisco.com/c/en/us/support/docs/ip/simple-network-management-protocol-snmp/13383-21.html#examp
  3. Combine the router's ip sla with EEM: ip sla periodically pings the peer's interface address, and when the peer IP becomes unreachable, EEM forcibly shuts down the corresponding interface. The drawback is that once the interface is shut down, someone has to bring it back up by hand after the carrier repairs the line; anything non-automatic is less than ideal, especially during holidays when nobody is on duty. See http://www.zhaocs.info/sla_eem_1.html (a polling sketch follows this list).
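
Since this post's approach pairs ip sla with SNMP, the poller side can be as simple as reading the latest IP SLA probe result over SNMP. A sketch with net-snmp (the community string, router address, and SLA operation index 10 are assumptions, and the OID from CISCO-RTTMON-MIB should be verified against your IOS version):

# rttMonLatestRttOperSense for SLA operation 10 (1 = ok)
snmpget -v2c -c public 192.0.2.1 1.3.6.1.4.1.9.9.42.1.2.10.1.2.10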
