2019-01-17
11:56 <mvolz@deploy1001> scap-helm zotero finished [production]
11:56 <mvolz@deploy1001> scap-helm zotero cluster staging completed [production]
11:56 <mvolz@deploy1001> scap-helm zotero upgrade staging -f zotero-values-staging.yaml --version=0.0.1 stable/zotero [namespace: zotero, clusters: staging] [production]
11:55 <arturo> T209527 copy nfsd-ldap between jessie-wikimedia and stretch-wikimedia in reprepro. It will require a rebuild though bc updated build-deps/deps [production]
11:55 <mvolz@deploy1001> scap-helm zotero upgrade staging -f zotero-values-staging.yaml stable/zotero [namespace: zotero, clusters: staging] [production]
11:43 <marostegui> Poweroff db1082 db1081 db1080 db1079 db1075 db1074 es1012 es1011 - T213748 [production]
11:36 <mvolz@deploy1001> scap-helm zotero finished [production]
11:36 <mvolz@deploy1001> scap-helm zotero cluster codfw completed [production]
11:36 <mvolz@deploy1001> scap-helm zotero upgrade production -f zotero-values-codfw.yaml stable/zotero [namespace: zotero, clusters: codfw] [production]
11:16 <onimisionipe> shutdown elastic103[0-5] to prepare for T213859 [production]
11:09 <elukey> stop eventlogging on eventlog1002 and eventlogging replication on db1108 as prep step for db1107 maintenance [production]
10:55 <marostegui> Lag will be generated on labs due to maintenance on sanitarium db masters [production]
10:54 <marostegui> Stop MySQL on db1082 db1081 db1080 db1079 db1075 db1074 es1012 es1011 - T213748 [production]
10:54 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool DBs on A2 rack T213748 (duration: 00m 54s) [production]
10:39 <moritzm> installing libcaca security updates [production]
10:30 <arturo> T213859 icinga downtime cloudservices1004 for 1 day [production]
10:29 <moritzm> installing ruby-loofah security updates [production]
10:09 <marostegui> Stop MySQL on db1103:3312 and db1103:3314, also poweroff the server - T213859 [production]
10:08 <moritzm> installing krb5 security updates on trusty [production]
10:04 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1103 - T213859 (duration: 00m 53s) [production]
09:59 <marostegui> Poweroff dbproxy1001 dbproxy1002 dbproxy1003 for a3 maintenance - T213859 [production]
09:25 <marostegui> Poweroff dbstore1003 for hw maintenance T213859 [production]
09:24 <moritzm> power off graphite1003 for later hw maintenance (T213859) [production]
09:18 <marostegui> Deploy schema change on db1095:3313 - T85757 [production]
09:02 <vgutierrez> rolling NIC firmware upgrade cp[1081-1090] - T203194 [production]
08:42 <jijiki> Enabling puppet on rdb1005 and switch redis::misc::master to rdb1006 - T213859 [production]
08:37 <moritzm> installing remaining systemd security updates on stretch [production]
08:32 <jijiki> Restarting nutcracker on scb100* for 484572 - T213859 [production]
08:32 <jynus> stop, upgrade and restart db1075 [production]
08:31 <marostegui> Deploy schema change on s3 codfw, lag will be generated - T85757 [production]
08:28 <marostegui> Drop table tag_summary from enwiki - T212255 [production]
08:24 <jijiki> Disabling puppet on rdb1005 and switch redis::misc::master to rdb1006 - T213859 [production]
07:43 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Increase weight for db1123 (duration: 00m 53s) [production]
07:20 <marostegui> Change thread_pool_stall_limit on db1075 and db1078 - T213858 [production]
07:18 <marostegui> Enable GTID on db1075 - T213858 [production]
07:04 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Remove s3 read only T213858 (duration: 00m 30s) [production]
07:03 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Switchover s3master eqiad from db1075 to db1078 T213858 (duration: 00m 30s) [production]
07:01 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Set s3 on read-only T213858 (duration: 00m 31s) [production]
07:00 <marostegui> Start s3 failover T213858 [production]
06:30 <marostegui> Disable puppet on db1075 and db1078 - T213858 [production]
06:26 <marostegui> Enable GTID back on all hosts but db1075 db1078 - T213858 [production]
06:19 <marostegui> Change s3 topology to get ready for s3 failover - T213858 [production]
06:14 <marostegui> Disable gtid on s3 hosts - T213858 [production]
06:10 <marostegui> Downtime s3 hosts for 2 hours - T213858 [production]
04:12 <ppchelko@deploy1001> Finished deploy [mobileapps/deploy@89c4d8d]: revert new summary (duration: 01m 55s) [production]
04:10 <ppchelko@deploy1001> Started deploy [mobileapps/deploy@89c4d8d]: revert new summary [production]
04:02 <cdanis@deploy1001> Started restart [parsoid/deploy@4b82683]: (no justification provided) [production]
2019-01-16
23:25 <ppchelko@deploy1001> Finished deploy [recommendation-api/deploy@0ff39e2]: Deployment attempt with decreased worker count (duration: 04m 08s) [production]
23:21 <ppchelko@deploy1001> Started deploy [recommendation-api/deploy@0ff39e2]: Deployment attempt with decreased worker count [production]
23:10 <Krinkle> krinkle@tungsten:/srv/: rm -rf xhprof; for T196406 [production]