2020-01-11
00:05 <mutante> s/cloudsearch/codesearch/g [devtools]
00:04 <mutante> creating throwaway instance "cloudsearch" [devtools]
00:04 <mutante> deleting instance deploy1001 (buster), creating deploy-1002 (stretch) instead [devtools]
2020-01-10
23:31 <bstorm_> updated toollabs-webservice package to 0.56 [tools]
23:22 <James_F> Zuul: Restore extension-coverage to projects with …-codehealth T242444 [releng]
22:33 <mutante> ms-be1026 sudo systemctl reset-failed (failed Session 372989 of user debmonitor) [production]
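For reference, clearing a failed unit like the one logged above usually looks like the following; the exact session unit name is an assumption, not taken from the log:

  # list failed units, then clear the failed state so monitoring stops flagging it
  sudo systemctl --failed
  sudo systemctl reset-failed session-372989.scope  # assumed unit name for Session 372989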
21:05 <bstorm_> updated toollabs-webservice package to 0.55 for testing [toolsbeta]
20:45 <jeh> cloudcontrol200[13]-dev schedule downtime until Feb 28 2020 on systemd service check T242462 [production]
20:29 <jeh> cloudmetrics100[12] schedule downtime until Feb 28 2020 on prometheus check T242460 [production]
20:03 <urandom> drop legacy Parsoid/JS storage keyspaces, production env -- T242344 [production]
19:56 <otto@deploy1001> helmfile [EQIAD] Ran 'apply' command on namespace 'eventgate-logging-external' for release 'logging-external'. [production]
19:54 <otto@deploy1001> helmfile [CODFW] Ran 'apply' command on namespace 'eventgate-logging-external' for release 'logging-external'. [production]
19:52 <otto@deploy1001> helmfile [STAGING] Ran 'apply' command on namespace 'eventgate-main' for release 'main'. [production]
19:51 <otto@deploy1001> helmfile [STAGING] Ran 'apply' command on namespace 'eventgate-analytics' for release 'analytics'. [production]
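The eventgate entries above follow the standard helmfile deploy pattern. A minimal sketch, assuming the usual per-service helmfile layout on the deploy host (the path and environment names are assumptions):

  # review the pending change per environment, then apply it
  cd /srv/deployment-charts/helmfile.d/services/eventgate-logging-external  # assumed path
  helmfile -e staging diff
  helmfile -e staging apply
  helmfile -e codfw apply
  helmfile -e eqiad apply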
19:48 <mutante> LDAP - add Zbyszko Papierski to "wmf" group (T242341) [production]
19:47 <mutante> LDAP - add Hugh Nowlan to "wmf" group (T242309) [production]
19:42 <dcausse> restarting blazegraph on wdqs1005 [production]
19:40 <ebernhardson> restart mjolnir-kafka-bulk-daemon across eqiad and codfw search clusters [production]
19:40 <ebernhardson@deploy1001> Finished deploy [search/mjolnir/deploy@e141941]: repair model upload in bulk daemon (duration: 05m 02s) [production]
19:35 <ebernhardson@deploy1001> Started deploy [search/mjolnir/deploy@e141941]: repair model upload in bulk daemon [production]
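The Started/Finished pair above is the usual scap3 deploy flow from the deploy host. A sketch, assuming the standard checkout location for the repo:

  cd /srv/deployment/search/mjolnir/deploy  # assumed checkout path
  git log -1 --oneline                      # confirm e141941 is what will ship
  scap deploy 'repair model upload in bulk daemon'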
19:13 <otto@deploy1001> helmfile [STAGING] Ran 'apply' command on namespace 'eventgate-logging-external' for release 'logging-external'. [production]
18:53 <mutante> welcome new (restbase) service deployer Clara Andrew-Wani (T242152) [production]
18:35 <bd808> Restarted zuul on contint1001 at 18:29Z; no logs since 2020-01-10 17:55:28,452 [releng]
18:29 <bd808> Restarted zuul on contint1001; no logs since 2020-01-10 17:55:28,452 [production]
16:15 <bstorm_> restarted the tool to clean up loads of uninterruptible and zombie perl processes [tools.ftl]
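One way to spot the processes described above before restarting; the perl filter matches the log entry, the exact command is an assumption:

  # list zombie (Z) and uninterruptible-sleep (D) processes owned by the tool
  ps -eo pid,stat,user,comm | awk '$2 ~ /[ZD]/ && $4 == "perl"'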
15:45 <bstorm_> depooled tools-paws-worker-1013 to reboot because I think it is the last tools server with that mount issue (I hope) [tools]
15:35 <bstorm_> depooling and rebooting tools-worker-1016 because it still had the leftover mount problems [tools]
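Assuming these workers are drained through Kubernetes, the depool/reboot cycle above roughly corresponds to (flags per kubectl of that era; node name from the log):

  kubectl drain tools-worker-1016 --ignore-daemonsets --delete-local-data
  # reboot the node, then put it back in service
  kubectl uncordon tools-worker-1016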
15:30 <bstorm_> git stash-ing local puppet changes in hopes that arturo has that material locally, and it doesn't break anything to do so [tools]
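A sketch of the stash step above, with a label so the changes can be found again later (the message text is an assumption):

  git stash push -m 'arturo local puppet changes, set aside 2020-01-10'
  git stash list  # verify the stash landed before continuing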
14:30 <elukey> restart oozie with new settings to instruct it to pick up spark-defaults.conf settings from /etc/spark2/conf [analytics]
13:27 <arturo> cloudvirt1009: virsh undefine i-000069b6. This is tools-elastic-01 which is running on cloudvirt1008 (so, leaked on cloudvirt1009) [admin]
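The leaked-definition cleanup above roughly corresponds to the following on cloudvirt1009; a sketch, assuming the domain shows as shut off there:

  # confirm the domain is not actually running on this hypervisor
  virsh list --all | grep i-000069b6
  # remove the stale libvirt definition (does not touch the live copy on cloudvirt1008)
  virsh undefine i-000069b6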
13:04 <arturo> cyberbot-db-01 is now on cloudvirt1029 [cyberbot]
11:48 <moritzm> stop/mask nginx on hassium/hassaleh T224567 [production]
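Stop/mask as logged above leaves the unit unable to start until it is unmasked; a minimal sketch:

  sudo systemctl stop nginx.service
  sudo systemctl mask nginx.service  # symlinks the unit to /dev/null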
10:56 <akosiaris> repool mathoid codfw for testing canary support in the mathoid helm chart [production]
10:56 <akosiaris@cumin1001> conftool action : set/pooled=true; selector: name=codfw,dnsdisc=mathoid [production]
10:51 <akosiaris@deploy1001> helmfile [CODFW] Ran 'apply' command on namespace 'mathoid' for release 'canary'. [production]
10:51 <akosiaris@deploy1001> helmfile [CODFW] Ran 'apply' command on namespace 'mathoid' for release 'production'. [production]
10:40 <akosiaris@deploy1001> helmfile [STAGING] Ran 'sync' command on namespace 'mathoid' for release 'staging'. [production]
10:38 <akosiaris> depool mathoid codfw in preparation for testing canary support in the mathoid helm chart [production]
10:37 <akosiaris@cumin1001> conftool action : set/pooled=false; selector: name=codfw,dnsdisc=mathoid [production]
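The conftool actions in this section (mathoid above, ncredir below) map to confctl invocations along these lines; selectors are taken from the log, the exact flags are an assumption:

  # depool mathoid in codfw via DNS discovery, then repool after the canary test
  confctl --object-type discovery select 'name=codfw,dnsdisc=mathoid' set/pooled=false
  confctl --object-type discovery select 'name=codfw,dnsdisc=mathoid' set/pooled=true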
10:24 <moritzm> rename Ganeti group for esams from "default" to "row_OE" T236216 [production]
10:21 <moritzm> rename Ganeti group for eqsin from "default" to "row_1" T228099 [production]
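The Ganeti group renames above are a single master-side command per cluster; a sketch with the esams values:

  # on the cluster's Ganeti master
  gnt-group list
  gnt-group rename default row_OE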
09:46 <arturo> moving cyberbot-db-01 from cloudvirt1009 to cloudvirt1029 to try a faster hypervisor [cyberbot]
09:34 <legoktm> shutdown upgrader-05 instance [library-upgrader]
09:02 <marostegui> Remove revision partitions from db2091:3312 [production]
09:01 <marostegui@cumin1001> dbctl commit (dc=all): 'Depool db2091:3312', diff saved to https://phabricator.wikimedia.org/P10113 and previous config saved to /var/cache/conftool/dbconfig/20200110-090143-marostegui.json [production]
08:59 <marostegui@cumin1001> dbctl commit (dc=all): 'Pool db2088:3312', diff saved to https://phabricator.wikimedia.org/P10112 and previous config saved to /var/cache/conftool/dbconfig/20200110-085921-marostegui.json [production]
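The dbctl entries above follow the usual stage-then-commit pattern; a sketch, assuming the standard dbctl subcommands (the tool records the diff and previous config automatically, as the log shows):

  dbctl instance db2088:3312 pool
  dbctl config commit -m 'Pool db2088:3312'
  dbctl instance db2091:3312 depool
  dbctl config commit -m 'Depool db2091:3312'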
08:55 <vgutierrez> restarting pybal on lvs3005 (high-traffic1) - T242321 [production]
08:51 <vgutierrez> restarting pybal on lvs3007 - T242321 [production]
08:48 <vgutierrez@puppetmaster1001> conftool action : set/pooled=yes; selector: service=nginx,name=ncredir3002.esams.wmnet [production]
08:48 <vgutierrez@puppetmaster1001> conftool action : set/pooled=yes; selector: service=nginx,name=ncredir3001.esams.wmnet [production]