2019-04-08
11:11 <fsero> upgrading envoy to 1.9.1 T215810 [production]
10:42 <jdrewniak@deploy1001> Synchronized portals: Wikimedia Portals Update: [[gerrit:502190| Bumping portals to master (T128546)]] (duration: 00m 58s) [production]
10:41 <jdrewniak@deploy1001> Synchronized portals/wikipedia.org/assets: Wikimedia Portals Update: [[gerrit:502190| Bumping portals to master (T128546)]] (duration: 00m 59s) [production]
10:34 <moritzm> upgrading app servers in codfw to HHVM 3.18.5+dfsg-1+wmf8+deb9u2 and wikidiff 1.8.1 (T203069) [production]
10:23 <jijiki> Running debdeploy to upgrade librsvg [production]
09:43 <gehel> force allocation of 3 unassigned shards on elasticsearch / cirrus / eqiad [production]
09:30 <arturo> T219776 puppet node clean labtestnet2003.codfw.wmnet [production]
09:19 <volans> restarting icinga on icinga1001 - T196336 [production]
08:45 <moritzm> upgrading API servers mw1221-mw1235 to HHVM 3.18.5+dfsg-1+wmf8+deb9u2 and wikidiff 1.8.1 (T203069) [production]
08:34 <akosiaris@deploy1001> scap-helm zotero finished [production]
08:34 <akosiaris@deploy1001> scap-helm zotero cluster staging completed [production]
08:34 <akosiaris@deploy1001> scap-helm zotero upgrade -f zotero-values-staging.yaml --reset-values staging stable/zotero [namespace: zotero, clusters: staging] [production]
08:32 <akosiaris@deploy1001> scap-helm zotero finished [production]
08:32 <akosiaris@deploy1001> scap-helm zotero cluster eqiad completed [production]
08:32 <akosiaris@deploy1001> scap-helm zotero upgrade -f zotero-values-eqiad.yaml production stable/zotero [namespace: zotero, clusters: eqiad] [production]
08:32 <akosiaris> lower CPU, memory limits for zotero pods. Set 1 cpu, 700Mi. This should help the pods to recover faster in some cases. The old memory leak issues we used to have seem to be no longer present [production]
08:31 <akosiaris@deploy1001> scap-helm zotero finished [production]
08:31 <akosiaris@deploy1001> scap-helm zotero cluster codfw completed [production]
08:31 <akosiaris@deploy1001> scap-helm zotero upgrade -f zotero-values-codfw.yaml production stable/zotero [namespace: zotero, clusters: codfw] [production]
08:17 <godog> delete fundraising folder from public grafana - T219825 [production]
08:01 <godog> bounce grafana after https://gerrit.wikimedia.org/r/c/operations/puppet/+/501519 [production]
07:59 <moritzm> upgrading mw1266-mw1275 to HHVM 3.18.5+dfsg-1+wmf8+deb9u2 and wikidiff 1.8.1 (T203069) [production]
07:59 <moritzm> upgrading mw1266-mw1255 to HHVM 3.18.5+dfsg-1+wmf8+deb9u2 and wikidiff 1.8.1 (T203069) [production]
07:24 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool all slaves in x1 T217453 (duration: 00m 58s) [production]
07:19 <marostegui> Deploy schema change on the first 10 wikis - T217453 [production]
07:18 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool all slaves in x1 T217453 (duration: 00m 59s) [production]
07:02 <moritzm> installing wget security updates [production]
07:02 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool all slaves in x1 T143763 (duration: 00m 58s) [production]
06:34 <_joe_> restarted netbox, SIGSEGV on HUP-induced reload [production]
05:20 <marostegui> Deploy schema change on x1 master with replication, there will be lag on x1 slaves T143763 [production]
05:18 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool all slaves in x1 T219777 T143763 (duration: 01m 30s) [production]
2019-04-07
11:34 <volans|off> restarted icinga on icinga2001 [production]
06:34 <oblivian@puppetmaster1001> conftool action : set/pooled=true; selector: dnsdisc=zotero,name=codfw [production]
06:23 <_joe_> deleting zotero pods with high memory watermark in codfw [production]
06:03 <oblivian@puppetmaster1001> conftool action : set/pooled=false; selector: dnsdisc=zotero,name=codfw [production]
2019-04-06
10:09 <gilles> Purging ruwiki namespaces > 0 [production]
2019-04-05
23:10 <thcipriani> revert some recent problematic gerrit acl changes [production]
22:46 <chaomodus> restarted pdfrender on scb1002 T174916 [production]
21:45 <hashar> thcipriani restarted Gerrit. CI works again # T220243 [production]
21:37 <thcipriani> restarting gerrit [production]
21:29 <hashar> CI / Zuul is no longer processing events / T220243 [production]
17:29 <thcipriani> gerrit back on 2.15.11 [production]
17:27 <thcipriani> restart gerrit [production]
17:26 <thcipriani@deploy1001> Finished deploy [gerrit/gerrit@a4e66d4]: Gerrit back to 2.15.11 on cobalt (restart incoming) (duration: 00m 11s) [production]
17:26 <thcipriani@deploy1001> Started deploy [gerrit/gerrit@a4e66d4]: Gerrit back to 2.15.11 on cobalt (restart incoming) [production]
17:25 <thcipriani@deploy1001> Finished deploy [gerrit/gerrit@a4e66d4]: Gerrit back to 2.15.11 (on gerrit2001 only) (duration: 00m 10s) [production]
17:25 <thcipriani@deploy1001> Started deploy [gerrit/gerrit@a4e66d4]: Gerrit back to 2.15.11 (on gerrit2001 only) [production]
17:19 <krinkle@deploy1001> Synchronized php-1.33.0-wmf.23/includes/diff/TextSlotDiffRenderer.php: Ia326c67de28a4e / T220217 (duration: 01m 02s) [production]
17:12 <krinkle@deploy1001> Synchronized php-1.33.0-wmf.24/includes/diff/TextSlotDiffRenderer.php: Ia326c67de28a4e / T220217 (duration: 01m 00s) [production]
16:02 <krinkle@deploy1001> Synchronized php-1.33.0-wmf.24/includes/jobqueue/jobs/RefreshLinksJob.php: Ib1ac31365f9c / T220037 (duration: 00m 59s) [production]