2019-04-08
08:34 <akosiaris@deploy1001> scap-helm zotero upgrade -f zotero-values-staging.yaml --reset-values staging stable/zotero [namespace: zotero, clusters: staging] [production]
08:32 <akosiaris@deploy1001> scap-helm zotero finished [production]
08:32 <akosiaris@deploy1001> scap-helm zotero cluster eqiad completed [production]
08:32 <akosiaris@deploy1001> scap-helm zotero upgrade -f zotero-values-eqiad.yaml production stable/zotero [namespace: zotero, clusters: eqiad] [production]
08:32 <akosiaris> lowered CPU and memory limits for zotero pods: set 1 CPU, 700Mi. This should help the pods recover faster in some cases. The old memory leak issues we used to have no longer seem to be present [production]
08:31 <akosiaris@deploy1001> scap-helm zotero finished [production]
08:31 <akosiaris@deploy1001> scap-helm zotero cluster codfw completed [production]
08:31 <akosiaris@deploy1001> scap-helm zotero upgrade -f zotero-values-codfw.yaml production stable/zotero [namespace: zotero, clusters: codfw] [production]
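The zotero entries above pair akosiaris's note about new resource limits with per-cluster scap-helm upgrades. scap-helm is Wikimedia's wrapper around helm; a minimal sketch of roughly the raw helm call it drives, assuming the chart exposes a conventional resources block (the --set keys are illustrative, not the chart's actual schema):

    # Roughly what `scap-helm zotero upgrade -f zotero-values-codfw.yaml
    # production stable/zotero` drives per cluster; --set keys are assumptions.
    helm upgrade production stable/zotero \
        --namespace zotero \
        -f zotero-values-codfw.yaml \
        --set resources.limits.cpu=1 \
        --set resources.limits.memory=700Mi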
08:17 <godog> delete fundraising folder from public grafana - T219825 [production]
08:01 <godog> bounce grafana after https://gerrit.wikimedia.org/r/c/operations/puppet/+/501519 [production]
07:59 <moritzm> upgrading mw1266-mw1275 to HHVM 3.18.5+dfsg-1+wmf8+deb9u2 and wikidiff 1.8.1 (T203069) [production]
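Rolling package upgrades like the one above are typically driven fleet-wide with cumin, Wikimedia's remote-execution tool; a sketch targeting the ten app servers (the host expression and package names are assumptions):

    # Upgrade HHVM and the wikidiff extension on mw1266-mw1275 (names assumed):
    sudo cumin 'mw12[66-75].eqiad.wmnet' 'apt-get install -y hhvm wikidiff2'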
07:24 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool all slaves in x1 T217453 (duration: 00m 58s) [production]
07:19 <marostegui> Deploy schema change on the first 10 wikis - T217453 [production]
07:18 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool all slaves in x1 T217453 (duration: 00m 59s) [production]
07:02 <moritzm> installing wget security updates [production]
07:02 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool all slaves in x1 T143763 (duration: 00m 58s) [production]
06:34 <_joe_> restarted netbox, SIGSEGV on HUP-induced reload [production]
05:20 <marostegui> Deploying schema change on x1 master with replication; there will be lag on x1 slaves T143763 [production]
05:18 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool all slaves in x1 T219777 T143763 (duration: 01m 30s) [production]
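Read bottom-up (the log is reverse chronological), the x1 entries follow the standard depool / alter / repool cycle for online schema changes. A hedged sketch of that sequence; the host name and DDL are purely illustrative:

    # 1. Depool the x1 replicas: edit wmf-config/db-eqiad.php, then sync it out.
    scap sync-file wmf-config/db-eqiad.php 'Depool all slaves in x1 T143763'

    # 2. Apply the schema change on the x1 master with replication enabled,
    #    letting it flow to the (depooled) replicas. Host and DDL hypothetical:
    mysql -h x1-master.eqiad.wmnet -e 'ALTER TABLE example_table ADD example_col INT'

    # 3. Once replication lag clears, repool the replicas the same way.
    scap sync-file wmf-config/db-eqiad.php 'Repool all slaves in x1 T143763'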
01:17 <Cam11598> restarted cubbie, bots 21, 22, 23 and 24. [cvn]
2019-04-07
22:18 <andrewbogott> upgrade puppet-compiler version to 0.5.0 (via hiera setting on Horizon) for T219430 [puppet-diffs]
16:54 <zhuyifei1999_> tools-sgeexec-0928 unresponsive since around 22:00 UTC. No data on Graphite. Can't ssh in even as root. Hard rebooting via Horizon [tools]
14:37 <andrewbogott> restarting encoding01 [video]
11:34 <volans|off> restarted icinga on icinga2001 [production]
06:34 <oblivian@puppetmaster1001> conftool action : set/pooled=true; selector: dnsdisc=zotero,name=codfw [production]
06:23 <_joe_> deleting zotero pods with high memory watermark in codfw [production]
06:03 <oblivian@puppetmaster1001> conftool action : set/pooled=false; selector: dnsdisc=zotero,name=codfw [production]
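Again reading bottom-up, these three entries depool zotero's codfw discovery record, delete the misbehaving pods, and repool. A sketch with confctl and kubectl; the --object-type flag and the pod name are assumptions:

    # Depool codfw at the DNS-discovery layer:
    confctl --object-type discovery select 'dnsdisc=zotero,name=codfw' set/pooled=false

    # Delete the pods sitting at a high memory watermark (pod name hypothetical):
    kubectl -n zotero delete pod zotero-production-abc123

    # Repool once the replacement pods are healthy:
    confctl --object-type discovery select 'dnsdisc=zotero,name=codfw' set/pooled=true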
01:06 <bstorm_> cleared E state from 6 queues [tools]
00:30 <Krinkle> Reloading Zuul to deploy https://gerrit.wikimedia.org/r/501793 [releng]
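The recurring "Reloading Zuul to deploy …" entries pick up merged layout changes without a full restart. Zuul v2's scheduler rereads its configuration on SIGHUP; a sketch, with the service name an assumption:

    # Reload the Zuul layout without dropping queued jobs:
    sudo systemctl reload zuul          # or, equivalently:
    sudo kill -HUP "$(pgrep -f zuul-server)"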
2019-04-06
10:09 <gilles> Purging ruwiki namespaces > 0 [production]
02:17 <legoktm> rebuilding npm-php image again https://gerrit.wikimedia.org/r/501848 [releng]
01:52 <legoktm> rebuilding npm-php image https://gerrit.wikimedia.org/r/501847 [releng]
01:10 <legoktm> deploying https://gerrit.wikimedia.org/r/501786 https://gerrit.wikimedia.org/r/501714 https://gerrit.wikimedia.org/r/501707 https://gerrit.wikimedia.org/r/501782 https://gerrit.wikimedia.org/r/501709 https://gerrit.wikimedia.org/r/500111 https://gerrit.wikimedia.org/r/500106 https://gerrit.wikimedia.org/r/500127 https://gerrit.wikimedia.org/r/500119 [releng]
00:53 <Krinkle> Reloading Zuul to deploy https://gerrit.wikimedia.org/r/501813 [releng]
2019-04-05
23:53 <Krinkle> Beta cluster puppetmaster has been stalled behind origin/production for the past 24 hours (57 patches behind) due to a local merge conflict [releng]
23:10 <thcipriani> revert some recent problematic gerrit acl changes [production]
22:46 <chaomodus> restarted pdfrender on scb1002 T174916 [production]
22:24 <Krinkle> Reloading Zuul to deploy https://gerrit.wikimedia.org/r/501789 [releng]
22:02 <legoktm> rebuilding mediawiki-phan docker image https://gerrit.wikimedia.org/r/501794 [releng]
21:45 <hashar> thcipriani restarted Gerrit. CI works again # T220243 [production]
21:37 <thcipriani> restarting gerrit [production]
21:29 <hashar> CI / Zuul is no longer processing events / T220243 [production]
20:11 <thcipriani> updating docker-pkg files on contint1001 for https://gerrit.wikimedia.org/r/501465 [releng]
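CI image rebuilds like the ones logged in this section are driven by docker-pkg on contint1001, which regenerates and builds changed Dockerfile templates. A minimal sketch of an invocation; the config path and image directory are assumptions:

    # Rebuild changed images from their templates (paths hypothetical):
    docker-pkg -c /etc/docker-pkg/integration.yaml build /srv/dockerfiles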
18:48 <zhuyifei1999_> checked out FETCH_HEAD on quarry-web-01 T209226 [quarry]
18:43 <zhuyifei1999_> applied 0001-SECURITY-escape-CSV-injections.patch on quarry-web-01 and restarted uwsgi T209226 [quarry]
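In chronological order, the quarry entries hot-patch the running checkout first, then (at 18:48) move it onto the fetched upstream commit once the fix has landed. A sketch; the remote name and the uwsgi unit name are assumptions:

    # 18:43 - apply the security patch in place and restart the app server:
    git apply 0001-SECURITY-escape-CSV-injections.patch
    sudo systemctl restart uwsgi

    # 18:48 - pin the working copy to the freshly fetched commit:
    git fetch origin
    git checkout FETCH_HEAD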
17:29 <thcipriani> gerrit back on 2.15.11 [production]
17:27 <thcipriani> restart gerrit [production]
17:26 <thcipriani@deploy1001> Finished deploy [gerrit/gerrit@a4e66d4]: Gerrit back to 2.15.11 on cobalt (restart incoming) (duration: 00m 11s) [production]
17:26 <thcipriani@deploy1001> Started deploy [gerrit/gerrit@a4e66d4]: Gerrit back to 2.15.11 on cobalt (restart incoming) [production]
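The deploy1001 entries are emitted by scap's deploy subcommand; the bracketed [gerrit/gerrit@a4e66d4] names the repository and the commit being deployed. A sketch of the rollback, with the checkout path and service unit assumed:

    # On deploy1001, from the gerrit deployment checkout (path hypothetical):
    cd /srv/deployment/gerrit/gerrit
    git checkout a4e66d4               # pin to the known-good 2.15.11 build
    scap deploy 'Gerrit back to 2.15.11 on cobalt (restart incoming)'

    # Then restart the service on cobalt (unit name assumed):
    sudo systemctl restart gerrit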