2019-01-16
ยง
|
12:12 <zfilipin@deploy1001> Synchronized wmf-config/InitialiseSettings.php: SWAT: [[gerrit:476884|Enable Partial Blocks on itwiki (T210444)]] (duration: 00m 53s) [production]
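The "Synchronized wmf-config/..." entries in this log are emitted by scap's sync-file subcommand on the deployment host; the log message passed on the command line becomes the SAL entry. A minimal sketch of that workflow, assuming the change is already merged (paths and message follow the usual convention, but are illustrative here):

```bash
# On the deployment host (deploy1001), from the staging checkout.
cd /srv/mediawiki-staging
git pull    # fetch the merged wmf-config change
# Sync the single changed file to the fleet; the message below is
# what shows up in the SAL entry above.
scap sync-file wmf-config/InitialiseSettings.php \
    'SWAT: Enable Partial Blocks on itwiki (T210444)'
```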
12:12 <jynus> upgrade and restart db1095 [production]
11:02 <fsero> draining kubernetes1001 for maintenance T213859 [production]
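Draining a Kubernetes node for maintenance means cordoning it (no new pods) and evicting the pods it runs. A minimal sketch with stock kubectl, assuming the usual eqiad hostname (WMF may wrap this in its own tooling):

```bash
# Mark the node unschedulable, then evict its pods so maintenance
# can proceed without it serving traffic (T213859).
kubectl cordon kubernetes1001.eqiad.wmnet
# --ignore-daemonsets: daemonset-managed pods cannot be evicted;
# --delete-local-data: discard emptyDir scratch data, if any.
kubectl drain kubernetes1001.eqiad.wmnet --ignore-daemonsets --delete-local-data
# After maintenance, allow scheduling again:
kubectl uncordon kubernetes1001.eqiad.wmnet
```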
10:59 <addshore> slot done [production]
10:59 <addshore@deploy1001> Synchronized wmf-config/InitialiseSettings.php: wgWBQualityConstraintsEnableConstraintsCheckJobs false (duration: 00m 51s) [production]
10:53 <addshore@deploy1001> Synchronized wmf-config/InitialiseSettings.php: wgWBQualityConstraintsEnableConstraintsCheckJobs true wd (duration: 00m 52s) [production]
10:48 <addshore@deploy1001> Synchronized wmf-config/InitialiseSettings.php: wgWBQualityConstraintsEnableConstraintsCheckJobs true testwd (duration: 00m 52s) [production]
10:38 <addshore@deploy1001> Synchronized wmf-config/InitialiseSettings.php: wikidatawiki, wgWBQualityConstraintsEnableConstraintsCheckJobsRatio 1% T204031 [[gerrit:484621]] (duration: 00m 52s) [production]
10:28 <godog> restart rsyslog on wezen, tls listener stuck [production]
10:25 <jynus@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1077 with low load (duration: 00m 51s) [production]
10:19 <elukey> executed kafka preferred-replica-election on the logging Kafka cluster in an attempt to spread load more uniformly [production]
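A preferred-replica election moves partition leadership back to each partition's first-listed replica, which evens out leader (and therefore client) load after broker restarts have left leadership skewed. A sketch with the stock Kafka tooling of that era (the `kafka` command on WMF hosts is a site-local wrapper; the ZooKeeper address and chroot are illustrative):

```bash
# Trigger a preferred-replica election for all partitions; leadership
# moves back to each partition's preferred (first-listed) replica.
kafka-preferred-replica-election.sh \
    --zookeeper conf1001.eqiad.wmnet:2181/kafka/logging-eqiad
# Verify: any partition whose Leader differs from the first broker
# listed in Replicas still has a non-preferred leader.
kafka-topics.sh --describe \
    --zookeeper conf1001.eqiad.wmnet:2181/kafka/logging-eqiad
```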
10:19 <addshore@deploy1001> Synchronized wmf-config/InitialiseSettings.php: testwikidatawiki, wgWBQualityConstraintsEnableConstraintsCheckJobsRatio 100 T204031 [[gerrit:484621]] (duration: 00m 52s) [production]
10:18 <addshore@deploy1001> sync-file aborted: testwikidatawiki, wgWBQualityConstraintsEnableConstraintsCheckJobsRatio 100 T204031 [[gerrit:484621]] (duration: 00m 02s) [production]
10:14 <addshore@deploy1001> Synchronized wmf-config/InitialiseSettings.php: testwikidatawiki, wgWBQualityConstraintsEnableConstraintsCheckJobsRatio 50 T204031 [[gerrit:484621]] (duration: 00m 52s) [production]
10:13 <addshore@deploy1001> sync-file aborted: testwikidatawiki, wgWBQualityConstraintsEnableConstraintsCheckJobsRatio 50 T204031 [[gerrit:484621]] (duration: 00m 00s) [production]
10:03 <addshore@deploy1001> Synchronized wmf-config/InitialiseSettings-labs.php: BETA ONLY, [[gerrit:484621]] (duration: 00m 52s) [production]
09:52 <godog> upgrade controller firmware on ms-be1016 - T213856 [production]
09:47 <jynus> upgrade and restart db1077 [production]
09:42 <jynus@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1077 (duration: 00m 52s) [production]
09:29 <marostegui> Stop s3 actor-migration script in order to allow s3 to catch up and to avoid lag during the failover - T188327 T213858 [production]
09:17 <godog> powercycle ms-be1016 - T213856 [production]
09:16 <marostegui> Stop replication in sync on dbstore1002:x1 and db2034 - T213670 [production]
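"In sync" here means both replicas halt at the same upstream position, so their data can be compared or snapshotted consistently (T213670). A minimal sketch of one common recipe, assuming MariaDB multi-source replication with a connection named 'x1' on dbstore1002 (hostnames follow the log; the GTID value is purely illustrative):

```bash
# 1. Stop the first replica and note the GTID it stopped at.
mysql -h dbstore1002.eqiad.wmnet \
    -e "STOP SLAVE 'x1'; SHOW SLAVE 'x1' STATUS\G" | grep Gtid_IO_Pos
# 2. Run the second replica up to exactly that GTID; with UNTIL it
#    stops by itself once the position is reached, leaving both
#    replicas at the same point in the stream.
mysql -h db2034.codfw.wmnet \
    -e "START SLAVE UNTIL master_gtid_pos = '171-171966669-12345';"
```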
09:10 <dcausse> T210381: elasticsearch search cluster, creating completion suggester indices on psi & omega elastic instances in eqiad & codfw [production]
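Completion-suggester indices for CirrusSearch are built by a maintenance script rather than by hand against the elasticsearch API. A sketch of the usual invocation on a maintenance host, assuming the standard mwscript wrapper (the wiki chosen is illustrative; in practice this is looped over the affected wikis):

```bash
# Rebuild the completion-suggester (titlesuggest) index for one wiki;
# the script builds a fresh index and swaps it in when done.
mwscript extensions/CirrusSearch/maintenance/UpdateSuggesterIndex.php \
    --wiki=enwiki
```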
09:00 <godog> test roll-restart rsyslog on mw hosts in eqiad - T211124 [production]
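Rolling a restart across a fleet at WMF is commonly done with cumin, which selects hosts by alias or query and batches command execution. A sketch, assuming a host alias for the eqiad appserver fleet (the alias name, batch size, and sleep interval are illustrative):

```bash
# Restart rsyslog on mediawiki appservers in eqiad, 5 hosts at a
# time with a pause between batches to limit blast radius.
sudo cumin -b 5 -s 30 'A:mw-eqiad' 'systemctl restart rsyslog'
```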
08:58 <akosiaris@deploy1001> scap-helm zotero finished [production]
08:58 <akosiaris@deploy1001> scap-helm zotero cluster eqiad completed [production]
08:58 <akosiaris@deploy1001> scap-helm zotero install -n production -f zotero-values-eqiad.yaml stable/zotero [namespace: zotero, clusters: eqiad] [production]
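scap-helm is a site wrapper around Helm (v2 at the time) that pins the namespace and per-cluster context, then logs the result lines seen above. The underlying operation corresponds roughly to the following, assuming a Helm 2 client, the stable/zotero chart, and per-cluster values files (the kubeconfig path is an assumption):

```bash
# Install a release named "production" of the zotero chart into the
# zotero namespace on the eqiad cluster (in Helm 2, -n names the
# release rather than the namespace).
helm --kubeconfig /etc/kubernetes/zotero-eqiad.config \
    install -n production --namespace zotero \
    -f zotero-values-eqiad.yaml stable/zotero
# Subsequent changes to an existing release go through upgrade:
helm --kubeconfig /etc/kubernetes/zotero-eqiad.config \
    upgrade production -f zotero-values-eqiad.yaml stable/zotero
```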
08:57 <marostegui> Re-point m3-master from dbproxy1003 to dbproxy1008 - T213865 [production]
08:53 <moritzm> installing systemd security updates for stretch [production]
08:53 <akosiaris> depool zotero eqiad for helm release cleanup [production]
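Depooling a service from one datacenter is done through conftool, so traffic fails over to the remaining cluster while work proceeds. A sketch with confctl against the DNS-discovery object, assuming zotero is under discovery control (object selectors are illustrative):

```bash
# Stop routing zotero traffic to eqiad while the helm releases are
# cleaned up; the codfw cluster keeps serving.
sudo confctl --object-type discovery \
    select 'dnsdisc=zotero,name=eqiad' set/pooled=false
# Once the cleanup is done, pool it back:
sudo confctl --object-type discovery \
    select 'dnsdisc=zotero,name=eqiad' set/pooled=true
```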
08:47 <akosiaris> repool zotero in codfw [production]
08:42 <filippo@deploy1001> Synchronized wmf-config/InitialiseSettings.php: Default to new logging infrastructure - T211124 (duration: 01m 05s) [production]
08:40 <akosiaris@deploy1001> scap-helm zotero finished [production]
08:40 <akosiaris@deploy1001> scap-helm zotero cluster codfw completed [production]
08:40 <akosiaris@deploy1001> scap-helm zotero upgrade production -f zotero-values-codfw.yaml stable/zotero [namespace: zotero, clusters: codfw] [production]
08:30 <akosiaris@deploy1001> scap-helm zotero finished [production]
08:30 <akosiaris@deploy1001> scap-helm zotero cluster codfw completed [production]
08:30 <akosiaris@deploy1001> scap-helm zotero install -n production -f zotero-values-codfw.yaml stable/zotero [namespace: zotero, clusters: codfw] [production]
08:25 <akosiaris@deploy1001> scap-helm zotero finished [production]
08:25 <akosiaris@deploy1001> scap-helm zotero cluster codfw completed [production]
08:25 <akosiaris@deploy1001> scap-helm zotero install -f zotero-values-codfw.yaml stable/zotero [namespace: zotero, clusters: codfw] [production]
08:24 <marostegui> Drop table tag_summary from s4 - T212255 [production]
08:19 <elukey> convert aria tables to innodb on dbstore1002 - T213706 [production]
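Converting tables from Aria to InnoDB is a straight ALTER per table; the work is finding the affected tables and rewriting each one. A minimal sketch that enumerates Aria tables in a schema and converts them (the schema name 'staging' matches other entries in this log; everything else is illustrative):

```bash
# List all Aria tables in the staging database, then rewrite each as
# InnoDB. ALTER ... ENGINE copies the whole table, so this can be
# slow and briefly blocks writes on large tables.
mysql -BN -e "SELECT table_name FROM information_schema.tables
              WHERE table_schema = 'staging' AND engine = 'Aria'" |
while read -r t; do
    mysql -e "ALTER TABLE staging.\`$t\` ENGINE=InnoDB;"
done
```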
08:18 <akosiaris> depool codfw zotero for helm release cleanups [production]
08:15 <marostegui> Upgrade MySQL on db2043 (s3 codfw master) [production]
08:11 <elukey> drop unneeded tables from the staging db on dbstore1002 according to T212493#4883535 [production]
07:36 <vgutierrez> powercycling cp1088 - T203194 [production]
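Power-cycling an unresponsive host is done out-of-band through its management controller rather than over SSH. A sketch with ipmitool, assuming the usual .mgmt DNS convention and IPMI-over-LAN credentials (hostname and user are illustrative; -E reads the password from the IPMI_PASSWORD environment variable):

```bash
# Check the current power state, then hard power-cycle the machine
# via its BMC; equivalent to pulling and restoring power.
ipmitool -I lanplus -H cp1088.mgmt.eqiad.wmnet -U root -E chassis power status
ipmitool -I lanplus -H cp1088.mgmt.eqiad.wmnet -U root -E chassis power cycle
```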
07:27 <marostegui> Drop table tag_summary from s2 - T212255 [production]
07:14 <marostegui> Upgrade MySQL on db2050 and db2036 [production]
06:07 <SMalyshev> started transfer wdqs2005->2006 [production]
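Seeding one WDQS host's Blazegraph journal from another is a bulk file copy performed with the service stopped, after which the updater catches up from where the donor left off. A minimal sketch using rsync, run on the destination host (the service name and journal path follow the standard WDQS layout but are assumptions here; production transfers may use dedicated tooling instead of direct rsync):

```bash
# On wdqs2006: stop the service so the journal file is quiescent,
# copy it from the donor host, then restart and let the updater
# catch up.
sudo systemctl stop wdqs-blazegraph
rsync -av --progress wdqs2005.codfw.wmnet:/srv/wdqs/wikidata.jnl /srv/wdqs/
sudo systemctl start wdqs-blazegraph
```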