2021-07-27
00:04 <bstorm> deploy a version of the php3.7 web image that includes the python2 package with tag :testing T287421 [tools]
2021-07-26
23:37 <legoktm@deploy1002> Synchronized php-1.37.0-wmf.15/extensions/Score/includes/Score.php: Increase lilypond version cache TTL to 1 hour (duration: 00m 57s) [production]
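"Synchronized" entries like the one above are produced by scap's single-file sync; a minimal sketch of the equivalent command on the deploy host (path and message copied from the log entry; assumes the standard `scap sync-file` subcommand):

```shell
# On the deployment host: sync one changed file to the MediaWiki fleet.
# scap logs the file path and message, producing the "Synchronized" SAL entry.
scap sync-file php-1.37.0-wmf.15/extensions/Score/includes/Score.php \
    'Increase lilypond version cache TTL to 1 hour'
```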
23:04 <bstorm> deleting pod to restart tool after adding an attempted cgi config to $HOME/.lighttpd.conf T287421 [tools.parliamentdiagram]
22:44 <bstorm> deleting pod to restart tool [tools.parliamentdiagram]
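Restarting a Toolforge tool by deleting its pod, as in the two entries above, relies on the tool's Kubernetes deployment recreating the pod; a sketch from the tool account (the pod name is hypothetical):

```shell
# Find the tool's running pod, then delete it; the deployment controller
# recreates the pod, which restarts the webservice with the new config.
kubectl get pods
kubectl delete pod parliamentdiagram-5d9f7c-abcde   # hypothetical pod name
```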
20:54 <razzi> reran the failed workflow of cassandra-daily-wf-local_group_default_T_pageviews_per_article_flat-2021-7-25 [analytics]
20:45 <brennen> runner-1001: installed docker & gitlab-runner, registered runner-1001 to the gitlab instance for pipeline experimentation (T287279) [releng]
18:49 <wm-bot> <lucaswerkmeister> installed MinervaNeue and MobileFrontend a few hours ago, forgot to log it earlier (T287401) [tools.notwikilambda]
18:30 <cstone> SmashPig revision changed from be272c02ce to 020d4eccd4 [production]
18:10 <dduvall> reloading zuul to deploy https://gerrit.wikimedia.org/r/c/integration/config/+/707879 [releng]
18:00 <dduvall> creating 2 new jenkins jobs to deploy https://gerrit.wikimedia.org/r/c/integration/config/+/707879 [releng]
17:41 <legoktm> ran `scap pull` and repooled mw2336.codfw.wmnet - T287394 [production]
17:41 <legoktm@cumin1001> conftool action : set/pooled=yes; selector: name=mw2336.codfw.wmnet [production]
17:40 <jynus@cumin1001> END (PASS) - Cookbook sre.hosts.downtime (exit_code=0) for 2:00:00 on dbprov1002.eqiad.wmnet with reason: REIMAGE [production]
17:38 <jynus@cumin1001> START - Cookbook sre.hosts.downtime for 2:00:00 on dbprov1002.eqiad.wmnet with reason: REIMAGE [production]
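The START/END pair above is the sre.hosts.downtime cookbook silencing alerts during a reimage; a hedged sketch of an invocation matching the logged parameters (exact flag names may differ between spicerack versions):

```shell
# Downtime a host for 2 hours while it is reimaged
# (assumes --hours/--reason flags of the sre.hosts.downtime cookbook)
sudo cookbook sre.hosts.downtime --hours 2 --reason REIMAGE dbprov1002.eqiad.wmnet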
17:37 <bstorm> repooled the whole set of ingress workers after upgrades T280340 [tools]
16:37 <bstorm> removing tools-k8s-ingress-4 from active ingress nodes at the proxy T280340 [tools]
16:06 <legoktm> depooled mw2336.codfw.wmnet, mgmt is down too. T287394 [production]
16:04 <legoktm@cumin1001> conftool action : set/pooled=no; selector: name=mw2336.codfw.wmnet [production]
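The conftool action above (and the matching repool at 17:41) follows the usual depool/repool cycle; a sketch with the confctl CLI that emits these log lines (selector and action taken from the log; assumes the standard select/set syntax):

```shell
# Take the appserver out of rotation while its mgmt interface is down
confctl select 'name=mw2336.codfw.wmnet' set/pooled=no

# After repair: refresh the MediaWiki checkout, then repool
scap pull
confctl select 'name=mw2336.codfw.wmnet' set/pooled=yes
```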
15:29 <hashar> Restarted gerrit replica on gerrit2001.wikimedia.org # T287122 [production]
15:24 <ladsgroup@deploy1002> Synchronized php-1.37.0-wmf.15/extensions/AbuseFilter/includes/AbuseFilterHooks.php: Backport: [[gerrit:707021|Don’t generate current content text twice]], Part II (duration: 01m 49s) [production]
15:21 <ladsgroup@deploy1002> Synchronized php-1.37.0-wmf.15/extensions/AbuseFilter/includes/VariableGenerator/RunVariableGenerator.php: Backport: [[gerrit:707021|Don’t generate current content text twice]], Part I (duration: 01m 50s) [production]
15:19 <topranks> Adding peering to AS139931 - Bangladesh Submarine Cable Company - at Equinix Singapore on cr3-eqsin [production]
14:42 <dcausse@deploy1002> helmfile [staging] Ran 'sync' command on namespace 'rdf-streaming-updater' for release 'main' . [production]
14:39 <addshore> waited for redis, then started xmlrcsd and es2r as xmlrcs user per https://wikitech.wikimedia.org/wiki/XmlRcs#Maintainer_info [huggle]
14:38 <addshore> restarted xmlrcs.huggle.eqiad1.wikimedia.cloud [huggle]
13:42 <oblivian@deploy1002> helmfile [codfw] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn' . [production]
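The helmfile entries in this log all reduce to syncing one release in one environment; a minimal sketch for the entry above (the chart directory layout is an assumption):

```shell
# Sync the 'pinkunicorn' release of the mwdebug service in codfw
# (directory path is an assumed deployment-charts layout)
cd /srv/deployment-charts/helmfile.d/services/mwdebug
helmfile -e codfw sync
```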
13:31 <dcaro> disabled diamond on the machines (T287350) [cloudvirt-canary]
13:31 <dcaro> disabled diamond on the machines (T287351) [cloudvirt-canary]
10:55 <ladsgroup@deploy1002> Synchronized wmf-config/InitialiseSettings.php: Disable DPL on ruwikinews (duration: 00m 27s) [production]
10:53 <ladsgroup@deploy1002> Scap failed!: 3/6 canaries failed their endpoint checks (https://en.wikipedia.org) [production]
10:52 <ladsgroup@deploy1002> Scap failed!: 2/6 canaries failed their endpoint checks (https://en.wikipedia.org) [production]
10:51 <jynus> deploying 10 second mw user query limit on s3 codfw replicas [production]
10:49 <marostegui@cumin1001> dbctl commit (dc=all): 'Fully repool db2149', diff saved to https://phabricator.wikimedia.org/P16895 and previous config saved to /var/cache/conftool/dbconfig/20210726-104953-marostegui.json [production]
10:46 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool db2149', diff saved to https://phabricator.wikimedia.org/P16894 and previous config saved to /var/cache/conftool/dbconfig/20210726-104649-marostegui.json [production]
10:46 <marostegui@cumin1001> dbctl commit (dc=all): 'Slowly repool db2149', diff saved to https://phabricator.wikimedia.org/P16893 and previous config saved to /var/cache/conftool/dbconfig/20210726-104613-marostegui.json [production]
10:38 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db2149', diff saved to https://phabricator.wikimedia.org/P16892 and previous config saved to /var/cache/conftool/dbconfig/20210726-103847-marostegui.json [production]
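The four dbctl commits above are a standard depool / staged-repool cycle for a replica; a sketch with the dbctl CLI (percentage steps are illustrative; assumes the instance/config subcommands):

```shell
# Take db2149 out of rotation for maintenance
dbctl instance db2149 depool
dbctl config commit -m 'Depool db2149'

# Warm it back up gradually, then fully
dbctl instance db2149 pool -p 10      # illustrative percentage
dbctl config commit -m 'Slowly repool db2149'
dbctl instance db2149 pool -p 100
dbctl config commit -m 'Fully repool db2149'
```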
10:33 <oblivian@deploy1002> helmfile [eqiad] Ran 'sync' command on namespace 'mwdebug' for release 'pinkunicorn' . [production]
09:55 <jgiannelos@deploy1002> helmfile [staging] Ran 'sync' command on namespace 'tegola-vector-tiles' for release 'main' . [production]
09:15 <XioNoX> rollback sampling for T286038 [production]
08:31 <jmm@cumin2002> END (PASS) - Cookbook sre.hosts.reboot-single (exit_code=0) for host sretest1002.eqiad.wmnet [production]
08:27 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host sretest1002.eqiad.wmnet [production]
08:26 <jmm@cumin2002> END (ERROR) - Cookbook sre.hosts.reboot-single (exit_code=97) for host sretest1001.eqiad.wmnet [production]
08:24 <hashar> jjb: update jobs to Quibble 1.0.1 # https://gerrit.wikimedia.org/r/c/integration/config/+/708038 [releng]
08:11 <jmm@cumin2002> START - Cookbook sre.hosts.reboot-single for host sretest1001.eqiad.wmnet [production]
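The sretest entries above are runs of the sre.hosts.reboot-single cookbook (the exit_code=97 run on sretest1001 failed and was followed by a passing run on sretest1002); a hedged sketch of the invocation (assumes the host is passed positionally; flags may vary by version):

```shell
# Reboot one host via the spicerack cookbook and wait for it to return
sudo cookbook sre.hosts.reboot-single sretest1002.eqiad.wmnet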
07:18 <_joe_> docker-image prune on deneb T287222 [production]
07:17 <_joe_> manage-production-images prune on deneb, T287222 [production]
07:08 <marostegui> Optimize dewiki.logging in eqiad (there will be lag) [production]
06:39 <moritzm> installing krb5 security updates [production]
05:55 <Amir1> start cleaning up auto-review flagged revs logs in plwiki [production]
2021-07-25
16:09 <majavah> deleting ingress pod running on worker-6 to get it to re-appear in ingress-4 [paws]