2021-09-09
13:11 <mutante> planet1002 - re-enabling disabled puppet [production]
13:06 <jmm@cumin2002> END (PASS) - Cookbook sre.idm.logout (exit_code=0) Logging Muehlenhoff out of all services on: 2 hosts [production]
13:06 <jmm@cumin2002> START - Cookbook sre.idm.logout Logging Muehlenhoff out of all services on: 2 hosts [production]
13:05 <jmm@cumin2002> END (PASS) - Cookbook sre.idm.logout (exit_code=0) Logging Muehlenhoff out of all services on: 2 hosts [production]
13:05 <jmm@cumin2002> START - Cookbook sre.idm.logout Logging Muehlenhoff out of all services on: 2 hosts [production]
13:03 <jmm@cumin2002> END (PASS) - Cookbook sre.idm.logout (exit_code=0) Logging Muehlenhoff out of all services on: 2 hosts [production]
13:03 <jmm@cumin2002> START - Cookbook sre.idm.logout Logging Muehlenhoff out of all services on: 2 hosts [production]
13:00 <mwdebug-deploy@deploy1002> helmfile [codfw] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn'. [production]
12:56 <mwdebug-deploy@deploy1002> helmfile [eqiad] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn'. [production]
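(Note: the two mwdebug syncs above are normally run from the service's helmfile directory on the deployment host. A rough sketch, assuming the usual deployment-charts layout; the path and flags are illustrative rather than the exact invocation:
    cd /srv/deployment-charts/helmfile.d/services/mwdebug
    helmfile -e codfw --selector name=pinkunicorn sync
)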
10:49 <hnowlan@puppetmaster1001> conftool action : set/pooled=no; selector: name=maps1007.eqiad.wmnet [production]
10:48 <hnowlan@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 5:00:00 on maps1007.eqiad.wmnet with reason: Resyncing from master [production]
10:48 <hnowlan@cumin1001> START - Cookbook sre.hosts.downtime for 5:00:00 on maps1007.eqiad.wmnet with reason: Resyncing from master [production]
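(Note: the downtime START/END lines are auto-logged by Spicerack. The underlying invocation on the cumin host looks roughly like the following; flag names are illustrative and depend on the cookbook version:
    sudo cookbook sre.hosts.downtime --hours 5 --reason "Resyncing from master" 'maps1007.eqiad.wmnet'
)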
10:48 <hnowlan@puppetmaster1001> conftool action : set/pooled=yes; selector: name=maps1007.eqiad.wmnet [production]
10:48 <hnowlan@puppetmaster1001> conftool action : set/pooled=yes; selector: name=maps1006.eqiad.wmnet [production]
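(Note: the conftool actions for maps1006/maps1007 correspond to confctl runs on the puppetmaster, roughly as below; the selector form is shown for illustration only:
    sudo confctl select 'name=maps1007.eqiad.wmnet' set/pooled=no
    sudo confctl select 'name=maps1007.eqiad.wmnet' set/pooled=yes
)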
10:47 <topranks> Removing peering to old IPs of AS139931 (BSCCL) at Equinix Singapore (cr3-eqsin). [production]
10:45 <topranks> Removing peering to AS24218 at Equinix Singapore (cr3-eqsin) - network no longer uses this ASN. [production]
10:22 <volans> upgrading spicerack on cumin1001 [production]
10:20 <volans@cumin2002> END (FAIL) - Cookbook sre.hosts.decommission (exit_code=1) for hosts mc1027.eqiad.wmnet [production]
10:10 <volans@cumin2002> START - Cookbook sre.hosts.decommission for hosts mc1027.eqiad.wmnet [production]
09:56 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host mx2002.wikimedia.org [production]
09:47 <volans@cumin2002> END (ERROR) - Cookbook sre.hosts.decommission (exit_code=97) for hosts mc1027.eqiad.wmnet [production]
09:46 <volans@cumin2002> START - Cookbook sre.hosts.decommission for hosts mc1027.eqiad.wmnet [production]
09:37 <godog> swift eqiad add ms-be10[64-67] with initial weight - T290546 [production]
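(Note: ring changes like this are usually made with upstream's swift-ring-builder, or a local wrapper around it; purely as an illustration of the upstream tool, with device, port and weight values invented:
    swift-ring-builder object.builder add r1z1-10.64.0.100:6000/sda0 100
    swift-ring-builder object.builder rebalance
)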
09:19 <filippo@puppetmaster1001> conftool action : set/pooled=false; selector: dnsdisc=swift-ro,name=eqiad [production]
09:19 <filippo@puppetmaster1001> conftool action : set/pooled=false; selector: dnsdisc=swift,name=eqiad [production]
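(Note: the dnsdisc entries here and at 08:58-08:59 below operate on discovery objects rather than individual backends; a rough equivalent, assuming confctl's discovery object type:
    sudo confctl --object-type discovery select 'dnsdisc=swift,name=eqiad' set/pooled=false
)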
09:15 <volans> rebooting sretest1001 to test ipmi reboot via spicerack [production]
09:15 <volans@cumin2002> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 0:20:00 on sretest1001.eqiad.wmnet with reason: testing reboot via ipmi [production]
09:15 <volans@cumin2002> START - Cookbook sre.hosts.downtime for 0:20:00 on sretest1001.eqiad.wmnet with reason: testing reboot via ipmi [production]
09:13 <btullis@cumin1001> END (PASS) - Cookbook sre.aqs.roll-restart (exit_code=0) for AQS aqs cluster: Roll restart of all AQS's nodejs daemons. - btullis@cumin1001 [production]
09:09 <btullis@cumin1001> START - Cookbook sre.aqs.roll-restart for AQS aqs cluster: Roll restart of all AQS's nodejs daemons. - btullis@cumin1001 [production]
08:59 <godog> move swift traffic fully to codfw to rebalance eqiad - T287539 [production]
08:59 <filippo@puppetmaster1001> conftool action : set/pooled=true; selector: dnsdisc=swift,name=codfw [production]
08:58 <filippo@puppetmaster1001> conftool action : set/pooled=true; selector: dnsdisc=swift-ro,name=codfw [production]
08:56 <volans> upgrading spicerack on cumin2002 to test the new release [production]
08:50 <volans> uploaded spicerack_0.0.59 to apt.wikimedia.org buster-wikimedia,bullseye-wikimedia [production]
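(Note: uploads to apt.wikimedia.org go through reprepro; an approximate form of the import, with the .changes filename assumed for illustration:
    sudo -i reprepro include buster-wikimedia spicerack_0.0.59_amd64.changes
    sudo -i reprepro include bullseye-wikimedia spicerack_0.0.59_amd64.changes
)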
08:23 <jelto> run ansible change 719041 on gitlab1001 [production]
08:13 <jelto> run ansible change 719041 on gitlab2001 [production]
07:07 <dzahn@cumin1001> END (PASS) - Cookbook sre.ganeti.makevm (exit_code=0) for new host durum1002.eqiad.wmnet [production]
06:47 <dzahn@cumin1001> START - Cookbook sre.ganeti.makevm for new host durum1002.eqiad.wmnet [production]
04:37 <ryankemper> [WDQS] Dispatched e-mail to the banned user agent (dailymotion) [production]
03:57 <ryankemper> [WDQS] Dispatched e-mail to WDQS public mailing list informing them the outage is over; all that's left is the e-mail to the banned UA [production]
03:47 <ryankemper> [WDQS] Restarting `wdqs-blazegraph` on `wdqs[2001-2008].codfw.wmnet`; if banning the dailymotion UA was sufficient then servers should come back up healthy and not drop back into deadlock [production]
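(Note: a restart like this is typically dispatched with cumin, in the same style as the status check quoted at 02:34 below; the host expression mirrors the entry above, batching options omitted:
    sudo cumin 'wdqs[2001-2008].codfw.wmnet' 'systemctl restart wdqs-blazegraph'
)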
03:43 <ryankemper> [WDQS] Running puppet agent on `wdqs[2001-2008].codfw.wmnet` to roll out https://gerrit.wikimedia.org/r/719753 [production]
03:29 <ryankemper> [WDQS] There's no clear indication of them being a culprit, but by far the most common user agent is a dailymotion VideocatalogTopic UA (see https://logstash.wikimedia.org/goto/51f238e9010d0220e5d33c6c210be93e) [production]
03:12 <bstorm> attempting to start replication on clouddb1017 s1 T290630 [production]
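(Note: on a multi-instance clouddb host this usually means connecting to the section's own instance and issuing START SLAVE; a sketch, with the s1 socket path assumed:
    sudo mysql -S /run/mysqld/mysqld.s1.sock -e "START SLAVE; SHOW SLAVE STATUS\G"
)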
03:11 <bstorm> stopping and restarting mariadb on clouddb1017 s1 [production]
03:04 <ryankemper> [WDQS] Dispatched email to Wikidata public mailing list about reduced service availability [production]
02:36 <ryankemper> [WDQS] https://grafana.wikimedia.org/d/000000489/wikidata-query-service?viewPanel=7&orgId=1&from=1631152574841&to=1631154942992 shows the availability pattern, anywhere we see missing data (null) represents time that blazegraph was locked up and therefore unable to report metrics [production]
02:34 <ryankemper> [WDQS] For context I glanced at `ryankemper@cumin1001:~$ sudo -E cumin 'P{wdqs2*}' 'sudo systemctl status wdqs-blazegraph'` before doing the aforementioned restarts; they'd all last restarted 25-28 minutes earlier [production]
02:33 <ryankemper> [WDQS] Restarting `wdqs-blazegraph` across all of `wdqs2*` [production]