2020-06-05
13:33 <jayme@deploy1001> helmfile [STAGING] Ran 'sync' command on namespace 'mathoid' for release 'staging' . [production]
13:19 <akosiaris@deploy1001> helmfile [STAGING] Ran 'sync' command on namespace 'cxserver' for release 'staging' . [production]
13:19 <elukey@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
13:19 <akosiaris@deploy1001> helmfile [STAGING] Ran 'sync' command on namespace 'citoid' for release 'staging' . [production]
13:18 <akosiaris@deploy1001> helmfile [STAGING] Ran 'sync' command on namespace 'changeprop-jobqueue' for release 'staging' . [production]
13:15 <elukey@cumin1001> START - Cookbook sre.hosts.downtime [production]
12:55 <ladsgroup@deploy1001> Synchronized wmf-config/interwiki.php: Hotfix for be-tarask interwiki link being broken (T111853) (duration: 01m 00s) [production]
12:41 <mutante> rebooting gerrit1002 to add more vCPUs, after [ganeti1009:~] $ sudo gnt-instance modify -B vcpus=8 gerrit1002.wikimedia.org T239151 [production]
12:20 <akosiaris@deploy1001> helmfile [STAGING] Ran 'sync' command on namespace 'zotero' for release 'staging' . [production]
12:19 <akosiaris@deploy1001> helmfile [STAGING] Ran 'sync' command on namespace 'wikifeeds' for release 'staging' . [production]
12:19 <akosiaris@deploy1001> helmfile [STAGING] Ran 'sync' command on namespace 'cxserver' for release 'staging' . [production]
12:19 <akosiaris@deploy1001> helmfile [STAGING] Ran 'sync' command on namespace 'citoid' for release 'staging' . [production]
12:19 <akosiaris@deploy1001> helmfile [STAGING] Ran 'sync' command on namespace 'changeprop-jobqueue' for release 'staging' . [production]
12:17 <akosiaris> update blubberoid changeprop changeprop-jobqueue citoid cxserver wikifeeds zotero in staging to latest charts [production]
12:17 <akosiaris@deploy1001> helmfile [STAGING] Ran 'sync' command on namespace 'changeprop' for release 'staging' . [production]
12:17 <akosiaris@deploy1001> helmfile [STAGING] Ran 'sync' command on namespace 'blubberoid' for release 'staging' . [production]
12:17 <akosiaris> fix typo in ganeti2016 /etc/network/interfaces and reboot [production]
11:28 <akosiaris> master-failover from ganeti2001 to ganeti2019 for ganeti01.svc.codfw.wmnet [production]
11:25 <akosiaris@deploy1001> helmfile [EQIAD] Ran 'sync' command on namespace 'kube-system' for release 'calico-policy-controller' . [production]
11:25 <akosiaris@deploy1001> helmfile [CODFW] Ran 'sync' command on namespace 'kube-system' for release 'calico-policy-controller' . [production]
11:25 <akosiaris@deploy1001> helmfile [STAGING] Ran 'sync' command on namespace 'kube-system' for release 'calico-policy-controller' . [production]
11:14 <mutante> running puppet on all ganeti nodes [production]
11:05 <ladsgroup@deploy1001> Synchronized wmf-config/interwiki.php: Update interwiki cache (duration: 02m 14s) [production]
10:32 <elukey@cumin1001> END (PASS) - Cookbook sre.cassandra.roll-restart (exit_code=0) [production]
10:11 <elukey@cumin1001> END (PASS) - Cookbook sre.ganeti.makevm (exit_code=0) [production]
10:02 <dzahn@cumin1001> END (PASS) - Cookbook sre.ganeti.makevm (exit_code=0) [production]
09:49 <jayme@deploy1001> helmfile [STAGING] Ran 'sync' command on namespace 'mathoid' for release 'staging' . [production]
09:46 <elukey@cumin1001> START - Cookbook sre.ganeti.makevm [production]
09:25 <elukey@cumin1001> START - Cookbook sre.cassandra.roll-restart [production]
09:03 <akosiaris@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
09:00 <akosiaris@cumin1001> START - Cookbook sre.hosts.downtime [production]
08:44 <akosiaris> reimage ganeti2016 for stretch [production]
08:42 <akosiaris> migrate mx2001.wikimedia.org to new ganeti nodes [production]
08:40 <akosiaris> migrate acrab to new ganeti nodes [production]
08:38 <akosiaris> failover master IP from ganeti1003 to ganeti1009 [production]
08:37 <akosiaris> empty ganeti100{1,2,3,4}. Move all VMs to new ganeti nodes [production]
08:28 <akosiaris> migrate seaborgium.wikimedia.org to new ganeti nodes [production]
08:27 <akosiaris> migrate etherpad1002 to new ganeti nodes [production]
08:11 <marostegui> Upgrade db2075 to 10.1.45 [production]
07:52 <vgutierrez> rolling restart of ats-tls - T249335 [production]
07:20 <dzahn@cumin1001> START - Cookbook sre.ganeti.makevm [production]
06:20 <elukey@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) [production]
06:17 <elukey@cumin1001> START - Cookbook sre.hosts.downtime [production]
2020-06-04
23:45 <catrope@deploy1001> Synchronized wmf-config/mc.php: Set coalesceKeys=non-global for WANCache on enwiki (duration: 00m 59s) [production]
23:29 <catrope@deploy1001> Synchronized wmf-config/InitialiseSettings.php: Enable Minerva site notices on Wikivoyage wikis (T254391) (duration: 00m 58s) [production]
23:19 <catrope@deploy1001> Synchronized wmf-config/InitialiseSettings.php: Set guwiki timezone to Asia/Kolkata (T253827) (duration: 00m 57s) [production]
23:17 <catrope@deploy1001> Synchronized static/images/: Change logo for zhwiki (T254467) (duration: 01m 00s) [production]
22:56 <ryankemper> re-enabled puppet on `cloudelastic1006`. All `cloudelastic` instances now have puppet enabled and are in sync [production]
20:56 <ryankemper> enabled puppet on `cloudelastic1005` in order to kick off a puppet run and verify that this new node joins the ES cluster properly [production]
20:39 <ryankemper> disabled puppet on `cloudelastic100[5,6]` which are two racked nodes that we are now bringing into service. Will re-enable after successful puppet-merge / elasticsearch cluster join [production]