2014-06-09
23:58 <manybubbles> lied - upgrading elastic1014 [production]
23:57 <manybubbles> upgrading elastic1015 [production]
23:30 <Krinkle> Reloading Zuul to deploy 6727b8b [production]
23:12 <maxsem> Synchronized php-1.24wmf8/extensions/MobileApp: (no message) (duration: 00m 03s) [production]
23:11 <maxsem> Synchronized php-1.24wmf7/extensions/MobileApp: (no message) (duration: 00m 03s) [production]
23:03 <maxsem> Synchronized wmf-config/InitialiseSettings.php: https://bugzilla.wikimedia.org/66377 (duration: 00m 04s) [production]
20:42 <manybubbles> upgraded elastic1007-elastic1010 without issue - starting elastic1010 [production]
20:08 <subbu> deployed Parsoid 9b673587 (deploy sha 7d0097a1) [production]
19:23 <ottomata> disabling puppet on analytics1012 [production]
19:00 <ottomata> decommissioning analytics1012 in hadoop cluster, this will become a Kafka broker [production]
17:59 <manybubbles> elastic1004-1006 upgraded without trouble - cluster is working on filling elastic1006 before moving on to 1007, and the rest [production]
17:04 <andrewbogott> switching labs to puppet3 [production]
17:03 <awight> update crm from b38497a9d0ef75fe2b20b03b649ac13a5e3f47a7 to b6815d29de97b80a0ab65db576213a604f0c7cb9 [production]
16:30 <manybubbles> upgrading elastic1003 - upgrade is going well so far so I'm going to stop watching it as closely and let it be more automated [production]
15:28 <manybubbles> elastic1001 went well, doing 1002 by hand again [production]
15:17 <anomie> Synchronized php-1.24wmf8/extensions/Wikidata: SWAT: Wikidata entity suggester bug fixes [[gerrit:138339]] (duration: 00m 16s) [production]
15:12 <greg-g> mw1151 still "permission denied" during deploys [production]
15:12 <anomie> Synchronized wmf-config/InitialiseSettings.php: SWAT: Enable TemplateData GUI on Portuguese Wikipedia [[gerrit:137986]] (duration: 00m 14s) [production]
15:09 <anomie> Synchronized php-1.24wmf7/extensions/VisualEditor/modules/ve-mw/ui/dialogs/ve.ui.MWSaveDialog.js: SWAT: VE fix for focus regression [[gerrit:137978]] (duration: 00m 15s) [production]
15:06 <andrewbogott> beta updating all instances to puppet 3 via a cherry-pick of https://gerrit.wikimedia.org/r/#/c/137898/ on deployment-salt [production]
15:05 <anomie> Synchronized php-1.24wmf8/extensions/VisualEditor/modules/ve-mw/: SWAT: VE fix for focus regression and alignment issues [[gerrit:137971]] [[gerrit:138122]] (duration: 00m 14s) [production]
15:01 <manybubbles> successfully synced plugins, upgrading elastic1001 to make sure everything is working ok with it - then we'll run through the others more quickly [production]
14:57 <manybubbles> syncing elasticsearch plugins for 1.2.1 - any elasticsearch restart from here on out needs to come with 1.2.1 or the node will break. [production]
14:54 <manybubbles> starting Elasticsearch upgrade with elastic1001 [production]
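The one-node-at-a-time upgrade logged above can be sketched as a dry-run shell script. Only the host names and the 1.2.1 version come from the log; the package manager, service commands, and allocation settings are assumptions about a typical Elasticsearch rolling upgrade, not a record of what was actually run:

```shell
#!/bin/sh
# Dry-run sketch of a one-node-at-a-time Elasticsearch rolling upgrade.
# Commands are printed, not executed.
NODE="elastic1001"
ES="http://${NODE}:9200"

run() { echo "WOULD RUN: $*"; }   # dry-run helper: print instead of executing

# 1. Stop shard reallocation so the cluster does not rebalance mid-restart.
run curl -XPUT "$ES/_cluster/settings" \
    -d '{"transient":{"cluster.routing.allocation.enable":"none"}}'

# 2. Upgrade and restart the node. Per the 14:57 entry, the 1.2.1 plugins
#    must already be synced, or the restarted node will break.
run apt-get install elasticsearch=1.2.1
run service elasticsearch restart

# 3. Re-enable allocation and wait for the cluster to recover.
run curl -XPUT "$ES/_cluster/settings" \
    -d '{"transient":{"cluster.routing.allocation.enable":"all"}}'
run curl "$ES/_cluster/health?wait_for_status=green&timeout=30m"
```

Repeating this loop per node matches the pattern in the log: the cluster goes yellow while each node is down, then refills before the next node is touched.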
07:14 <springle> disabled puppet on analytics1021 to avoid kafka broker restarting with missing mount [production]
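Disabling puppet, as in the entry above, pins a host's state so the next automated run cannot restart a service that is deliberately down. A minimal dry-run sketch, assuming the standard `puppet agent` lock flags (the message text is invented for illustration):

```shell
#!/bin/sh
# Dry-run sketch of disabling/re-enabling puppet on one host.
run() { echo "WOULD RUN: $*"; }   # print instead of executing

# Lock out automated runs, with a note for other operators.
run puppet agent --disable "kafka broker has a missing mount"

# ...fix the underlying problem, then re-enable and do one attended run:
run puppet agent --enable
run puppet agent --test
```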
05:15 <springle> xtrabackup clone db1046 to db1020 [production]
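The clone above can be sketched as a streaming Percona XtraBackup copy. Only the host names and the use of xtrabackup come from the log; the data directory, the xbstream transport, and the ssh pipeline are assumptions about a typical slave clone:

```shell
#!/bin/sh
# Dry-run sketch of cloning a MySQL/MariaDB host with Percona XtraBackup.
# Commands are printed, not executed.
SRC="db1046"
DST="db1020"
DATADIR="/srv/sqldata"   # assumed data directory

run() { echo "WOULD RUN: $*"; }

# 1. Stream a consistent snapshot from the source straight to the target.
run "innobackupex --stream=xbstream /tmp | ssh $DST 'xbstream -x -C $DATADIR'"

# 2. Apply the redo log on the target so the datadir is consistent.
run "ssh $DST innobackupex --apply-log $DATADIR"

# 3. Fix ownership and start mysqld on the target.
run "ssh $DST 'chown -R mysql:mysql $DATADIR && service mysql start'"
```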
04:44 <springle> umount /dev/sdf on analytics1021, fs in r/o mode, kafka broker not running. no checks yet [production]
03:24 <LocalisationUpdate> ResourceLoader cache refresh completed at Mon Jun 9 03:23:05 UTC 2014 (duration 23m 4s) [production]
02:29 <LocalisationUpdate> completed (1.24wmf8) at 2014-06-09 02:28:08+00:00 [production]
02:15 <LocalisationUpdate> completed (1.24wmf7) at 2014-06-09 02:14:46+00:00 [production]
2014-06-07
23:48 <hoo> Fixed four CentralAuth log entries on meta which were logged for WikiSets/0 [production]
21:36 <manybubbles> that means I turned off puppet and shut down Elasticsearch on elastic1017 - you can expect the cluster to go yellow for half an hour or so while the other nodes rebuild the redundancy that elastic1017 had [production]
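The yellow-then-green recovery described above can be watched with the standard Elasticsearch cluster-health API: "yellow" means every primary shard is assigned but some replicas are not yet rebuilt. A dry-run sketch, with the query host assumed (any surviving node would do):

```shell
#!/bin/sh
# Dry-run sketch of watching cluster health while replicas rebuild.
ES="http://elastic1001:9200"   # assumed: any node still in the cluster

run() { echo "WOULD RUN: $*"; }

# Spot-check the current status (green/yellow/red, unassigned shard count).
run curl "$ES/_cluster/health?pretty"

# Block until the replicas elastic1017 held are rebuilt elsewhere.
run curl "$ES/_cluster/health?wait_for_status=green&timeout=1h"
```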
21:35 <manybubbles> after consulting logs - elastic1017 has had high io wait since it was deployed - I'm taking it out of rotation [production]
21:31 <manybubbles> elastic1017 is sick - thrashing to death on io - restarting Elasticsearch to see if it recovers unthrashed [production]
17:56 <godog> restarted ES on elastic1017.eqiad.wmnet (at 17:22 UTC) [production]
03:24 <LocalisationUpdate> ResourceLoader cache refresh completed at Sat Jun 7 03:23:32 UTC 2014 (duration 23m 31s) [production]
02:31 <LocalisationUpdate> completed (1.24wmf8) at 2014-06-07 02:29:57+00:00 [production]
02:17 <LocalisationUpdate> completed (1.24wmf7) at 2014-06-07 02:16:30+00:00 [production]
2014-06-06
23:51 <Krinkle> Restarted Jenkins, force stopped Zuul, started Zuul, configured Jenkins via web interface (disable Gearman, save, enable Gearman); Seems to be back up now, finally. [production]
22:52 <mutante> same for rhenium, titanium, bast1001, calcium, carbon, ytterbium, stat1003 [production]
22:43 <RoanKattouw> Restarting Jenkins didn't help, jobs still aren't making it across from Zuul into Jenkins [production]
22:36 <RoanKattouw> Restarting stuck Jenkins [production]
22:35 <mutante> same for holmium, hafnium, silver, netmon1001, magnesium, neon, antimony [production]
22:17 <mutante> upgraded ssl packages on zirconium [production]
21:57 <Krinkle> Took Jenkins slave on gallium temporarily offline and back online to resolve possible stagnation [production]