2022-02-04
16:48 <jbond> update: added new ferm package ferm_2.5.1-1+wmf11u2 [production]
16:38 <pt1979@cumin2002> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
16:35 <pt1979@cumin2002> START - Cookbook sre.dns.netbox [production]
16:05 <elukey> unmask prometheus-mysqld-exporter.service and clean up the old @analytics + wmf_auto_restart units (service+timer) not used anymore on an-coord100[12] [production]
14:25 <btullis@cumin1001> END (PASS) - Cookbook sre.aqs.roll-restart (exit_code=0) for AQS aqs cluster: Roll restart of all AQS's nodejs daemons. [production]
14:18 <btullis@cumin1001> START - Cookbook sre.aqs.roll-restart for AQS aqs cluster: Roll restart of all AQS's nodejs daemons. [production]
12:08 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reimage (exit_code=0) for host ganeti1020.eqiad.wmnet with OS buster [production]
11:41 <marostegui@cumin1001> dbctl commit (dc=all): 'db1096:3316 (re)pooling @ 100%: repooling after schema change', diff saved to https://phabricator.wikimedia.org/P20174 and previous config saved to /var/cache/conftool/dbconfig/20220204-114117-root.json [production]
11:26 <marostegui@cumin1001> dbctl commit (dc=all): 'db1096:3316 (re)pooling @ 75%: repooling after schema change', diff saved to https://phabricator.wikimedia.org/P20173 and previous config saved to /var/cache/conftool/dbconfig/20220204-112613-root.json [production]
11:14 <jmm@cumin2002> START - Cookbook sre.hosts.reimage for host ganeti1020.eqiad.wmnet with OS buster [production]
11:13 <akosiaris@cumin1001> END (PASS) - Cookbook sre.dns.netbox (exit_code=0) [production]
11:11 <marostegui@cumin1001> dbctl commit (dc=all): 'db1096:3316 (re)pooling @ 50%: repooling after schema change', diff saved to https://phabricator.wikimedia.org/P20172 and previous config saved to /var/cache/conftool/dbconfig/20220204-111110-root.json [production]
11:07 <akosiaris@cumin1001> START - Cookbook sre.dns.netbox [production]
11:04 <marostegui@cumin1001> dbctl commit (dc=all): 'Remove all special groups from s1 codfw T263127', diff saved to https://phabricator.wikimedia.org/P20171 and previous config saved to /var/cache/conftool/dbconfig/20220204-110427-marostegui.json [production]
10:56 <marostegui@cumin1001> dbctl commit (dc=all): 'db1096:3316 (re)pooling @ 25%: repooling after schema change', diff saved to https://phabricator.wikimedia.org/P20170 and previous config saved to /var/cache/conftool/dbconfig/20220204-105606-root.json [production]
10:41 <marostegui@cumin1001> dbctl commit (dc=all): 'db1096:3316 (re)pooling @ 10%: repooling after schema change', diff saved to https://phabricator.wikimedia.org/P20165 and previous config saved to /var/cache/conftool/dbconfig/20220204-104102-root.json [production]
10:40 <moritzm> rebalancing row A in ganeti/eqiad, all nodes of that row are now running Buster T296721 [production]
10:03 <jmm@cumin2002> END (FAIL) - Cookbook sre.ganeti.addnode (exit_code=99) for new host ganeti1008.eqiad.wmnet to ganeti01.svc.eqiad.wmnet [production]
10:02 <jmm@cumin2002> START - Cookbook sre.ganeti.addnode for new host ganeti1008.eqiad.wmnet to ganeti01.svc.eqiad.wmnet [production]
09:58 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host ganeti1008.eqiad.wmnet [production]
09:53 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host ganeti1008.eqiad.wmnet [production]
08:20 <marostegui@cumin1001> dbctl commit (dc=all): 'Remove watchlist group from s4 eqiad T263127', diff saved to https://phabricator.wikimedia.org/P20164 and previous config saved to /var/cache/conftool/dbconfig/20220204-082010-marostegui.json [production]
07:18 <elukey> `git checkout main.html` on miscweb1002:/srv/org/wikidata/query to avoid puppet corrective actions (and the host being listed in alarms) [production]
07:09 <elukey> cleaned up wmf_auto_restart_prometheus-mysqld-exporter@analytics-meta on an-test-coord1001 and unmasked wmf_auto_restart_prometheus-mysqld-exporter (now used) [production]
07:03 <elukey> cleaned up wmf_auto_restart_prometheus-mysqld-exporter@matomo on matomo1002 (no longer used, listed as failed) [production]
07:00 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db1096:3316 schema change', diff saved to https://phabricator.wikimedia.org/P20163 and previous config saved to /var/cache/conftool/dbconfig/20220204-070003-marostegui.json [production]
05:59 <legoktm> uploaded pygments 2.11.2 to apt.wm.o (T298399) [production]
02:48 <ryankemper@cumin1001> START - Cookbook sre.hosts.decommission for hosts elastic2035.codfw.wmnet [production]
02:42 <ryankemper@cumin1001> END (FAIL) - Cookbook sre.hosts.decommission (exit_code=99) for hosts elastic2035.codfw.wmnet [production]
02:41 <ryankemper@cumin1001> START - Cookbook sre.hosts.decommission for hosts elastic2035.codfw.wmnet [production]
01:08 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
01:06 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
01:06 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
01:05 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
01:04 <brennen> for-real end of UTC late backport & config window [production]
01:04 <brennen@deploy1002> Synchronized php-1.38.0-wmf.20/extensions/Thanks/modules/ext.thanks.flowthank.js: Backport: [[gerrit:759319|Correct attribute for flow thanks (T300831)]] (duration: 00m 49s) [production]
00:50 <brennen> reopening UTC late backport window for [[gerrit:759319|Correct attribute for flow thanks (T300831)]] [production]
00:15 <mwdebug-deploy@deploy1002> helmfile [codfw] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
00:12 <cjming> end of UTC late backport & config window [production]
00:11 <cjming@deploy1002> Synchronized wmf-config/InitialiseSettings.php: Config: [[gerrit:759560|Update icons, wordmark for test wikis (T299512)]] (duration: 00m 49s) [production]
00:11 <mwdebug-deploy@deploy1002> helmfile [codfw] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
00:10 <mwdebug-deploy@deploy1002> helmfile [eqiad] DONE helmfile.d/services/mwdebug: sync on pinkunicorn [production]
00:10 <cjming@deploy1002> Synchronized static/images/mobile/copyright/: Config: [[gerrit:759560|Update icons, wordmark for test wikis (T299512)]] (duration: 00m 53s) [production]
00:09 <mwdebug-deploy@deploy1002> helmfile [eqiad] START helmfile.d/services/mwdebug: apply on pinkunicorn [production]
2022-02-03
23:34 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1101:3318 (T300402)', diff saved to https://phabricator.wikimedia.org/P20159 and previous config saved to /var/cache/conftool/dbconfig/20220203-233447-marostegui.json [production]
23:19 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1101:3318', diff saved to https://phabricator.wikimedia.org/P20158 and previous config saved to /var/cache/conftool/dbconfig/20220203-231942-marostegui.json [production]
23:15 <ryankemper> T294805 Added a silence on alerts.wikimedia.org for `CirrusSearchJVMGCOldPoolFlatlined` [production]
23:04 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1101:3318', diff saved to https://phabricator.wikimedia.org/P20157 and previous config saved to /var/cache/conftool/dbconfig/20220203-230437-marostegui.json [production]
22:49 <marostegui@cumin1001> dbctl commit (dc=all): 'Repooling after maintenance db1101:3318 (T300402)', diff saved to https://phabricator.wikimedia.org/P20156 and previous config saved to /var/cache/conftool/dbconfig/20220203-224933-marostegui.json [production]
22:39 <marostegui@cumin1001> dbctl commit (dc=all): 'Depooling db1101:3318 (T300402)', diff saved to https://phabricator.wikimedia.org/P20155 and previous config saved to /var/cache/conftool/dbconfig/20220203-223923-marostegui.json [production]