2017-12-04
06:38 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Remove db1044 from config as it will be decommissioned - T181696 (duration: 00m 45s) [production]
06:34 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Depool db1098 - T178359 (duration: 00m 46s) [production]
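As an illustrative sketch only (not taken from the log): depooling a replica such as db1098 is done by zeroing or commenting out its load weight in wmf-config/db-eqiad.php on the deployment host and then syncing the file with scap; the edit and commit message below are placeholders.
    # On the deployment host, edit the MediaWiki DB config so the host stops
    # receiving read traffic, then sync the file to the cluster.
    cd /srv/mediawiki-staging
    $EDITOR wmf-config/db-eqiad.php     # set db1098's weight to 0 or comment it out
    scap sync-file wmf-config/db-eqiad.php 'Depool db1098 - T178359'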
06:21 <marostegui> Deploy alter table on s3 master (db1075) without replication - T174569 [production]
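A minimal sketch of running a schema change directly on a master without it replicating, by disabling binary logging for the session; the table and column below are placeholders, not the actual change from T174569.
    mysql -h db1075.eqiad.wmnet -e "
      SET SESSION sql_log_bin = 0;  -- keep this session's statements out of the binlog
      ALTER TABLE example_table MODIFY example_col VARBINARY(767) NOT NULL;
    "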
02:32 <l10nupdate@tin> scap sync-l10n completed (1.31.0-wmf.10) (duration: 06m 28s) [production]
2017-12-03
15:33 <ejegg> disabled CiviCRM bounce processing job [production]
12:17 <akosiaris> empty ganeti1006, it had issues this morning per T181121 [production]
12:06 <marostegui> Fix dbstore1002 replication [production]
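The entry does not say what the fix was; as a generic, hedged sketch, diagnosing and restarting a broken replication thread on a MariaDB replica looks roughly like this (dbstore1002 replicated several shards, so a multi-source connection name is assumed; 's1' below is a placeholder):
    mysql -h dbstore1002.eqiad.wmnet -e "SHOW ALL SLAVES STATUS\G"   # check Slave_SQL_Running, Last_SQL_Error, ...
    # After addressing the underlying error, restart the affected connection:
    mysql -h dbstore1002.eqiad.wmnet -e "START SLAVE 's1';"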
07:44 <akosiaris> ran puppet on conf2002, etcdmirror-conftool-eqiad-wmnet got started again [production]
05:11 <andrewbogott> deleting files on labsdb1003 /srv/tmp older than 30 days [production]
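A minimal sketch of the cleanup described above, assuming plain files under /srv/tmp should be removed once they are older than 30 days:
    # Preview first, then delete; -mtime +30 matches files not modified in the last 30 days.
    find /srv/tmp -type f -mtime +30 -print
    find /srv/tmp -type f -mtime +30 -delete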
03:57 <no_justification> gerrit2001: icinga is flapping on the gerrit process/systemd check, but this is kind of known (not sure why it's doing this all of a sudden). It's not letting me acknowledge it, but it's fine/harmless. Cf T176532 [production]
2017-12-02
17:55 <marostegui> Reboot db1096.s5 to pick up the correct innodb_buffer_pool size after finishing compressing s5 - T178359 [production]
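For context, a sketch only (not the host's actual configuration): on the MariaDB versions in use at the time, innodb_buffer_pool_size was read from the server config at startup, hence the restart; the value and file below are placeholders, and on a multi-instance host like db1096 the s5 instance's own port or socket would be used.
    # e.g. in the instance's my.cnf (placeholder value):
    #   [mysqld]
    #   innodb_buffer_pool_size = 350G
    # After the restart, confirm the running value:
    mysql -h db1096.eqiad.wmnet -e "SELECT @@innodb_buffer_pool_size;"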
03:51 <hoo> Ran "scap pull" on snapshot1001, after final T181385 tests [production]
00:03 <mutante> tried one more time on db2028 and db2029, both trusty. On db2028, gmond was running as user ganglia-monitor and the puppet run failed; had to manually kill the process and run puppet again, then it was OK. On db2029, gmond was running as "499", but puppet just ran and removed it without manual intervention. (T177225) [production]
2017-12-01
23:15 <urandom> starting cassandra bootstrap, restbase1012-b - T179422 [production]
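A hedged sketch of how a bootstrap like the one above is typically monitored with nodetool (the restbase hosts run several Cassandra instances per machine, so nodetool has to be pointed at the right instance; no instance-specific ports are shown here):
    nodetool status     # the bootstrapping instance shows up as UJ (Up/Joining)
    nodetool netstats   # shows the active streaming sessions and their progress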
21:49 <mutante> db2029 - removing ganglia-monitor, testing to kill gmond, running puppet to figure out how to cleanly remove it on trusty [production]
21:12 <mutante> db2023: killed the gmond (ganglia-monitor) process manually; it was still running even though the ganglia-monitor package had been removed, and it was causing puppet breakage (seemingly only on trusty). After that, the puppet run is clean again and ganglia is removed. (T177225) (https://gerrit.wikimedia.org/r/#/c/394647/1) [production]
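A sketch of the manual cleanup sequence described in the entries above (commands assumed, not copied from the log):
    dpkg -l ganglia-monitor   # package is already removed, but...
    pgrep -a gmond            # ...a stray gmond process is still running
    pkill gmond               # kill it manually
    puppet agent -t           # re-run puppet; the run should now be clean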
20:18 <awight@tin> Started deploy [ores/deploy@9afbf14]: (non-production) Test ORES deployment to ores100* [production]
20:17 <awight@tin> Finished deploy [ores/deploy@9afbf14]: (non-production) Test ORES deployment to ores1001 (duration: 02m 31s) [production]
20:15 <awight@tin> Started deploy [ores/deploy@9afbf14]: (non-production) Test ORES deployment to ores1001 [production]
20:03 <aaron@tin> Synchronized php-1.31.0-wmf.10/includes/libs/objectcache/WANObjectCache.php: f096d0b465b75d - temp logging for statsd spam (duration: 00m 45s) [production]
18:59 <demon@tin> Synchronized wmf-config/CommonSettings-labs.php: no-op (duration: 00m 46s) [production]
18:22 <mutante> Phabricator: restarting Apache for php-curl update [production]
18:21 <_joe_> restarting apache2 on the codfw puppetmasters [production]
18:06 <marktraceur@tin> Synchronized php-1.31.0-wmf.10/extensions/UploadWizard/resources/controller/uw.controller.Deed.js: (no justification provided) (duration: 00m 46s) [production]
17:49 <mutante> phab2001 - restarted apache [production]
17:33 <herron> stopped ircecho on einsteinium [production]
17:00 <awight@tin> Unlocked for deployment [ores/deploy]: Don't deploy while we're messing with git-lfs (duration: 00m 14s) [production]
17:00 <awight@tin> Locking from deployment [ores/deploy]: Don't deploy while we're messing with git-lfs (planned duration: 16666666666m 39s) [production]
17:00 <awight@tin> Locking from deployment [ores/deploy]: Don't deploy while we're messing with git-lfs (planned duration: -1m 59s) [production]
16:59 <awight@tin> Unlocked for deployment [ores/deploy]: Don't deploy while we're messing with git-lfs (duration: 00m 07s) [production]
16:59 <awight@tin> Locking from deployment [ores/deploy]: Don't deploy while we're messing with git-lfs (planned duration: 60m 00s) [production]
16:34 <jynus> stopping db2092 to clone s1 to db2085 [production]
16:24 <urandom> starting cassandra bootstrap, restbase1012-a -- T179422 [production]
15:27 <godog> bounce uwsgi on labmon1001 - stuck [production]
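A hedged sketch of "bouncing" a stuck uwsgi service via systemd; the actual unit name on labmon1001 is not in the log, so the one below is a placeholder:
    systemctl list-units 'uwsgi*'             # find the real unit name first
    systemctl restart uwsgi-example.service   # placeholder unit name
    journalctl -u uwsgi-example.service -n 50 # check that it came back cleanly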
15:21 <moritzm> installing nspr security updates on trusty [production]
15:17 <moritzm> installing ffmpeg security updates [production]
15:14 <gehel@tin> Finished deploy [kartotherian/deploy@df7ebff]: testing new kartotherian packaging on maps-test2003 (duration: 00m 20s) [production]
15:14 <jynus@tin> Synchronized wmf-config/db-codfw.php: Undeploy db2092, use db2085 for s1 (duration: 00m 45s) [production]
15:14 <gehel@tin> Started deploy [kartotherian/deploy@df7ebff]: testing new kartotherian packaging on maps-test2003 [production]
15:14 <moritzm> installing libxcursor security updates on trusty [production]
15:09 <jynus@tin> Synchronized wmf-config/db-eqiad.php: Undeploy db2092, use db2085 for s1 (duration: 00m 45s) [production]
14:24 <awight@tin> Finished deploy [ores/deploy@532bd0b]: (non-production) Update ORES on new cluster (duration: 02m 06s) [production]
14:22 <awight@tin> Started deploy [ores/deploy@532bd0b]: (non-production) Update ORES on new cluster [production]
14:22 <akosiaris> upload apertium-crh-tur_0.3.0~r83159-1+wmf1 to apt.wikimedia.org/jessie-wikimedia component main. T181465 [production]
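As a sketch of how such an upload to the apt repository typically looks with reprepro (run on the repository host; the .changes filename is inferred from the version string above and may not match exactly):
    reprepro -C main include jessie-wikimedia \
        apertium-crh-tur_0.3.0~r83159-1+wmf1_amd64.changes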
14:10 <herron> cutting all puppet service records over to codfw puppet 4 masters [production]
12:44 <elukey> reboot druid1001 for kernel+jvm updates - T179943 [production]
12:11 <marostegui@tin> Synchronized wmf-config/db-codfw.php: Repool db2055 (duration: 00m 45s) [production]
11:59 <jynus> restarting and upgrading mysql on labsdb1004 [production]
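A generic, hedged sketch of an in-place MariaDB upgrade and restart like the one above; the package and service names are placeholders for whatever labsdb1004 actually runs:
    apt-get update
    apt-get install --only-upgrade mariadb-server   # placeholder package name
    systemctl restart mariadb                       # or the host-specific unit
    mysql_upgrade                                   # refresh system tables after the version change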
11:28 <jynus> upgrading and restarting dbstore2001 [production]
11:14 <marostegui@tin> Synchronized wmf-config/db-codfw.php: Depool db2055 (duration: 00m 46s) [production]