2020-12-05
00:45 <ryankemper@cumin2001> START - Cookbook sre.wdqs.data-transfer [production]
00:45 <ryankemper@cumin1001> START - Cookbook sre.wdqs.data-transfer [production]
00:45 <ryankemper@cumin1001> START - Cookbook sre.wdqs.data-transfer [production]
00:40 <Urbanecm> End of mwscript extensions/AbuseFilter/maintenance/updateVarDumps.php --wiki=$wiki --print-orphaned-records-to=/tmp/urbanecm/$wiki-orphaned.log --progress-markers > $wiki.log in a tmux at mwmaint1002 (wiki=eswiki; T246539) [production]
00:35 <ryankemper@cumin1001> START - Cookbook sre.wdqs.data-reload [production]
00:35 <ryankemper@cumin1001> END (FAIL) - Cookbook sre.wdqs.data-transfer (exit_code=99) [production]
00:34 <ryankemper@cumin1001> START - Cookbook sre.wdqs.data-transfer [production]
00:32 <ryankemper@cumin2001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
00:30 <ryankemper@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
00:28 <ryankemper@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
00:27 <ryankemper@cumin2001> START - Cookbook sre.hosts.downtime [production]
00:27 <ryankemper@cumin1001> START - Cookbook sre.hosts.downtime [production]
00:26 <ryankemper@cumin1001> START - Cookbook sre.hosts.downtime [production]
00:17 <Urbanecm> deploy1001 staging dir is DIRTY: /srv/mediawiki-staging (master u+1): last commit bce412514eadaa47dbede56c4b4918da492443ce, author Mukunda Modell (cc twentyafterfour) [production]
00:09 <ryankemper> T269204 reimaging the following instances to debian buster: `wdqs1004`, `wdqs2001`, `wdqs1003` [production]
2020-12-04
17:22 <Urbanecm> [urbanecm@mwmaint1002 ~/uploads]$ mwscript importImages.php --wiki=commonswiki --comment-ext=txt --user=Wilfredor . # T269452 [production]
15:47 <andrew@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
15:45 <andrew@cumin1001> START - Cookbook sre.hosts.downtime [production]
15:15 <akosiaris@deploy1001> helmfile [eqiad] Ran 'sync' command on namespace 'kube-system' for release 'calico-policy-controller' . [production]
15:14 <akosiaris@deploy1001> helmfile [codfw] Ran 'sync' command on namespace 'kube-system' for release 'calico-policy-controller' . [production]
15:14 <akosiaris@deploy1001> helmfile [staging] Ran 'sync' command on namespace 'kube-system' for release 'calico-policy-controller' . [production]
13:38 <akosiaris@deploy1001> helmfile [staging] Ran 'sync' command on namespace 'apertium' for release 'staging' . [production]
13:38 <akosiaris@deploy1001> helmfile [staging] Ran 'sync' command on namespace 'apertium' for release 'production' . [production]
13:07 <akosiaris> create apertium namespace on k8s clusters. T255672 [production]
11:24 <aborrero@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
11:24 <aborrero@cumin1001> START - Cookbook sre.hosts.downtime [production]
10:31 <jynus> setting db1133 as read-write for backup testing [production]
10:28 <moritzm> resetting cumin-check-aliases.service on cumin* hosts [production]
09:54 <aborrero@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
09:54 <aborrero@cumin1001> START - Cookbook sre.hosts.downtime [production]
09:30 <moritzm> installing zsh security updates on stretch [production]
09:26 <moritzm> installing mutt security updates [production]
08:58 <moritzm> installing lxml security updates [production]
07:09 <marostegui> Stop mysql on clouddb1016 to clone clouddb1020 T267090 [production]
07:02 <marostegui> Increase pvs on db[1151-1155] T269324 T268742 [production]
02:16 <eileen> civicrm revision changed from 913ccdfd2b to 5fa107d32a, config revision is ffe0a99133 [production]
01:43 <andrew@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
01:42 <andrew@cumin1001> START - Cookbook sre.hosts.downtime [production]
01:04 <ryankemper> T269406 https://grafana.wikimedia.org/d/000000305/maps-performances?viewPanel=11&orgId=1&var-cluster=maps1&from=1606827063027&to=1607043666975 shows that the normal daily dropoff in lag did not occur today, leading to the criticals. It's possible some sort of daily job has failed [production]
00:16 <andrew@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
00:14 <andrew@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
00:14 <andrew@cumin1001> START - Cookbook sre.hosts.downtime [production]
00:12 <andrew@cumin1001> START - Cookbook sre.hosts.downtime [production]
00:06 <andrew@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
00:04 <andrew@cumin1001> START - Cookbook sre.hosts.downtime [production]