2017-08-07
17:05 <gehel@tin> Finished deploy [wdqs/wdqs@da33919]: (no justification provided) (duration: 02m 28s) [production]
17:02 <gehel@tin> Started deploy [wdqs/wdqs@da33919]: (no justification provided) [production]
16:51 <valhallasw`cloud> restarted webservice on grid engine -- the git pull exec does not work on kubernetes [tools.wikibugs]
16:44 <marostegui> Restart s7 instance on db1069 to pick up new replication filters - T172693 [production]
16:37 <XioNoX> manually restarted varnish on cp1099 [production]
15:11 <thcipriani> restarting jenkins for plugin update [releng]
15:10 <thcipriani> restarting jenkins for plugin upgrade [production]
14:54 <andrewbogott> deleting proofreadpage.wikisource-dev.eqiad.wmflabs and removing project [wikisource-dev]
14:51 <gehel> reducing elasticsearch eqiad concurrent rebalance to 4 (from 8) [production]
14:38 <elukey> updated librdkafka1 and librdkafka++1 to 0.9.4.1 on hafnium [production]
14:35 <Amir1> 1a6e0e5 went to prod [wikilabels]
14:32 <mutante> phab2001 - stopping Apache, scheduling downtime for http and puppet [production]
14:28 <Amir1> 1a6e0e5 is going to staging [wikilabels]
14:22 <herron> mx[1,2]001, fermium: Installed libmail-dkim-perl and restarted spamassassin service - T172689 [production]
14:09 <andrewbogott> deleted etcd-k8s-CTEST and k8s-master-CTEST [toolsbeta]
13:43 <Amir1> deploy e56b8e2 to prod [wikilabels]
13:37 <Amir1> deploy e56b8e2 to staging [wikilabels]
13:15 <jynus> reboot db1098 [production]
12:39 <_joe_> restarting pdfrender on scb1001, T159922 [production]
12:39 <elukey> restart kafka on kafka1018 to force it out of the kafka topic leaders - T172681 [production]
12:26 <marostegui@tin> Synchronized wmf-config/db-codfw.php: Repool db2074 - T171321 (duration: 00m 45s) [production]
12:08 <gehel> deploying https://gerrit.wikimedia.org/r/#/c/299825/ - some logs will be lost during logstash restart [production]
10:02 <marostegui> Add dbstore2002:3313 to tendril - T171321 [production]
09:47 <jynus> stopping db1050's mysql and cloning it to db1089 [production]
09:06 <elukey> set net.netfilter.nf_conntrack_tcp_timeout_time_wait=65 (was 120) on all the analytics kafka brokers - T136094 [production]
09:03 <marostegui@tin> Synchronized wmf-config/db-codfw.php: Repool db2065 after fixing: linter, page and watchlist tables (duration: 00m 47s) [production]
08:12 <marostegui> Force BBU re-learn on db1016 - T166344 [production]
07:02 <marostegui> Stop replication on db2065 to reimport: page, linter and watchlist tables [production]
07:02 <marostegui@tin> Synchronized wmf-config/db-codfw.php: Depool db2065 to reimport: page, linter and watchlist tables (duration: 00m 47s) [production]
06:38 <marostegui> Stop MySQL on db2074 - T171321 [production]
06:37 <marostegui@tin> Synchronized wmf-config/db-codfw.php: Depool db2074 - T171321 (duration: 00m 46s) [production]
06:33 <marostegui> Stop replication on db2075 - T170662 [production]
06:27 <marostegui@tin> Synchronized wmf-config/db-codfw.php: Repool db2073 - T171321 (duration: 00m 47s) [production]
06:20 <marostegui> Force BBU re-learn on db1016 - T166344 [production]
04:14 <bd808> Edited public_html/index.html for T172660 [tools.masscamps]
04:13 <bd808> Edited public_html/index.html for T172660 [tools.massviews]
02:57 <l10nupdate@tin> ResourceLoader cache refresh completed at Mon Aug 7 02:57:42 UTC 2017 (duration 6m 42s) [production]
02:51 <l10nupdate@tin> scap sync-l10n completed (1.30.0-wmf.12) (duration: 07m 56s) [production]
02:30 <l10nupdate@tin> scap sync-l10n completed (1.30.0-wmf.11) (duration: 10m 16s) [production]
00:35 <bd808> Deployed bc82317 [tools.admin]
2017-08-06
13:28 <TabbyCat> Ran mwscript extensions/WikimediaMaintenance/dumpInterwiki.php deploymentwiki on the beta cluster [releng]
13:17 <elukey> powercycle mw2256 - com2 frozen - T163346 [production]
13:13 <elukey> restart pdfrender on scb1002 [production]
11:02 <elukey> stop yarn on analytics1034 to reload the tg3 driver - T172633 [analytics]
06:18 <ebernhardson@tin> Synchronized wmf-config/PoolCounterSettings.php: T169498: Reduce cirrus search pool counter to 200 parallel requests cluster wide (duration: 02m 54s) [production]
05:31 <bd808> Restarted webservice as requested on irc [tools.templatetransclusioncheck]
05:30 <bd808> Restarted webservice as requested on irc [tools.templatecount]
03:42 <bd808> Test [tools.stewardbots]
03:41 <zhuyifei1999_> upgrading youtube_dl on encoding0{1..3} & frontend, from version 2017.6.18 [video]
01:28 <chasemp> conf2002:~# service etcdmirror-conftool-eqiad-wmnet restart (the service failed; not sure what else to do) [production]