2019-04-08
12:03 <arturo> T219776 puppet node deactivate labtestnet2003.codfw.wmnet [production]
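(The deactivate above pairs with the "puppet node clean" logged at 09:30 further down; together they are the usual way to retire a host from Puppet. A minimal sketch of the pair as run on a puppetmaster, bare commands only; any WMF wrapper tooling around them is not shown in the log:

    # revoke the host's certificate and remove its stored facts/reports
    puppet node clean labtestnet2003.codfw.wmnet
    # mark the node deactivated in PuppetDB so its exported resources stop being collected
    puppet node deactivate labtestnet2003.codfw.wmnet
)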
12:00 <hashar> contint1001 upgraded zuul to 2.5.1-wmf6 # T208426 [production]
11:53 <hoo@deploy1001> Synchronized wmf-config/Wikibase.php: WikibaseClient: Conditionally enable mapframe support (T218051) (duration: 00m 58s) [production]
11:48 <hashar> contint2001: stopping zuul-server, it is not meant to be running there [production]
11:41 <hoo@deploy1001> Synchronized wmf-config/abusefilter.php: Enable blocking feature of AbuseFilter in zh.wikipedia (T210364) (duration: 00m 58s) [production]
11:25 <hoo@deploy1001> Synchronized wmf-config/InitialiseSettings.php: Create uploader user group for thwiki (T216615) (duration: 00m 58s) [production]
11:12 <jijiki> Restarted thumbor services after librsvg upgrade [production]
11:11 <fsero> upgrading envoy to 1.9.1 T215810 [production]
10:42 <jdrewniak@deploy1001> Synchronized portals: Wikimedia Portals Update: [[gerrit:502190| Bumping portals to master (T128546)]] (duration: 00m 58s) [production]
10:41 <jdrewniak@deploy1001> Synchronized portals/wikipedia.org/assets: Wikimedia Portals Update: [[gerrit:502190| Bumping portals to master (T128546)]] (duration: 00m 59s) [production]
10:34 <moritzm> upgrading app servers in codfw to HHVM 3.18.5+dfsg-1+wmf8+deb9u2 and wikidiff 1.8.1 (T203069) [production]
10:23 <jijiki> Running debdeploy to upgrade librsvg [production]
09:43 <gehel> force allocation of 3 unassigned shards on elasticsearch / cirrus / eqiad [production]
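(Force-allocating unassigned shards goes through Elasticsearch's _cluster/reroute API. A minimal sketch against the eqiad cluster; the index, shard, and node names here are hypothetical, since the log does not record which shards were affected:

    # find the unassigned shards
    curl -s 'localhost:9200/_cat/shards?h=index,shard,prirep,state' | grep UNASSIGNED
    # force-allocate a replica onto a node; for a primary, use allocate_stale_primary
    # together with accept_data_loss instead
    curl -s -XPOST 'localhost:9200/_cluster/reroute' -H 'Content-Type: application/json' -d '
      {"commands": [{"allocate_replica": {"index": "enwiki_content", "shard": 0, "node": "elastic1036"}}]}'
)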
09:30 <arturo> T219776 puppet node clean labtestnet2003.codfw.wmnet [production]
09:19 <volans> restarting icinga on icinga1001 - T196336 [production]
08:45 <moritzm> upgrading API servers mw1221-mw1235 to HHVM 3.18.5+dfsg-1+wmf8+deb9u2 and wikidiff 1.8.1 (T203069) [production]
08:34 <akosiaris@deploy1001> scap-helm zotero finished [production]
08:34 <akosiaris@deploy1001> scap-helm zotero cluster staging completed [production]
08:34 <akosiaris@deploy1001> scap-helm zotero upgrade -f zotero-values-staging.yaml --reset-values staging stable/zotero [namespace: zotero, clusters: staging] [production]
08:32 <akosiaris@deploy1001> scap-helm zotero finished [production]
08:32 <akosiaris@deploy1001> scap-helm zotero cluster eqiad completed [production]
08:32 <akosiaris@deploy1001> scap-helm zotero upgrade -f zotero-values-eqiad.yaml production stable/zotero [namespace: zotero, clusters: eqiad] [production]
08:32 <akosiaris> lower CPU and memory limits for zotero pods. Set 1 cpu, 700Mi. This should help the pods recover faster in some cases. The old memory leak issues we used to have no longer seem to be present [production]
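(The new limits would live in the zotero-values-*.yaml files passed to scap-helm in the surrounding entries; a hypothetical fragment, since the exact key path inside the chart's values is an assumption:

    resources:
      limits:
        cpu: 1
        memory: 700Mi
)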
08:31 <akosiaris@deploy1001> scap-helm zotero finished [production]
08:31 <akosiaris@deploy1001> scap-helm zotero cluster codfw completed [production]
08:31 <akosiaris@deploy1001> scap-helm zotero upgrade -f zotero-values-codfw.yaml production stable/zotero [namespace: zotero, clusters: codfw] [production]
08:17 <godog> delete fundraising folder from public grafana - T219825 [production]
08:01 <godog> bounce grafana after https://gerrit.wikimedia.org/r/c/operations/puppet/+/501519 [production]
07:59 <moritzm> upgrading mw1266-mw1275 to HHVM 3.18.5+dfsg-1+wmf8+deb9u2 and wikidiff 1.8.1 (T203069) [production]
07:24 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool all slaves in x1 T217453 (duration: 00m 58s) [production]
07:19 <marostegui> Deploy schema change on the first 10 wikis - T217453 [production]
07:18 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool all slaves in x1 T217453 (duration: 00m 59s) [production]
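(The three entries above are one depool / alter / repool cycle; the Synchronized lines are scap on deploy1001 pushing an edited wmf-config/db-eqiad.php. A minimal sketch of the depool half, with the weight edit itself happening beforehand and not shown:

    # after zeroing the x1 replicas' load weights in wmf-config/db-eqiad.php
    scap sync-file wmf-config/db-eqiad.php 'Depool all slaves in x1 T217453'
)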
07:02 <moritzm> installing wget security updates [production]
07:02 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool all slaves in x1 T143763 (duration: 00m 58s) [production]
06:34 <_joe_> restarted netbox, SIGSEGV on HUP-induced reload [production]
05:20 <marostegui> Deploy schema change on x1 master with replication, there will be lag on x1 slaves T143763 [production]
05:18 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool all slaves in x1 T219777 T143763 (duration: 01m 30s) [production]
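(The 05:20 entry predicts replication lag on the x1 replicas while the schema change runs on the master; a minimal sketch of watching it, with the replica host left as a placeholder since none is named in the log:

    # <x1-replica> is hypothetical; re-run or watch(1) this while the ALTER replicates
    mysql -h <x1-replica> -e 'SHOW SLAVE STATUS\G' | grep Seconds_Behind_Master
)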
2019-04-07
11:34 <volans|off> restarted icinga on icinga2001 [production]
06:34 <oblivian@puppetmaster1001> conftool action : set/pooled=true; selector: dnsdisc=zotero,name=codfw [production]
06:23 <_joe_> deleting zotero pods with high memory watermark in codfw [production]
06:03 <oblivian@puppetmaster1001> conftool action : set/pooled=false; selector: dnsdisc=zotero,name=codfw [production]
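(The two conftool actions bracket the pod deletion at 06:23: zotero's codfw discovery record is depooled, the high-memory pods are deleted, and codfw is repooled once replacements are up. A minimal sketch with confctl, assuming the selector syntax echoed in the log lines themselves:

    # depool codfw before touching the pods
    confctl --object-type discovery select 'dnsdisc=zotero,name=codfw' set/pooled=false
    # ... delete the high-memory pods ...
    # repool once the replacement pods are serving
    confctl --object-type discovery select 'dnsdisc=zotero,name=codfw' set/pooled=true
)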
2019-04-06
10:09 <gilles> Purging ruwiki namespaces > 0 [production]
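(The log does not say which tool did the purging; one plausible route is MediaWiki's maintenance/purgeList.php via the mwscript wrapper, sketched here for a single non-main namespace. The --namespace flag and the per-namespace invocation are assumptions about the tooling used:

    # hypothetical: CDN-purge every page in one non-main ruwiki namespace at a time
    mwscript purgeList.php --wiki=ruwiki --namespace=1 --delay=0.05
)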
2019-04-05
23:10 <thcipriani> revert some recent problematic gerrit acl changes [production]
22:46 <chaomodus> restarted pdfrender on scb1002 T174916 [production]
21:45 <hashar> thcipriani restarted Gerrit. CI works again # T220243 [production]
21:37 <thcipriani> restarting gerrit [production]
21:29 <hashar> CI / Zuul is no longer processing events / T220243 [production]
17:29 <thcipriani> gerrit back on 2.15.11 [production]
17:27 <thcipriani> restart gerrit [production]