2017-12-04 §
09:24 <elukey> reboot analytics104* (hadoop worker nodes) for kernel+jvm updates - T179943 [production]
09:19 <jynus> rebooting mariadb at labsdb1005 [production]
09:12 <moritzm> reimaging mw1259 (video scaler) to stretch, will be kept disabled initially (some controlled live tests following) [production]
08:57 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Increase traffic for db1096:3315 and 3316 - T178359 (duration: 00m 45s) [production]
08:45 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Increase traffic for db1096:3316 - T178359 (duration: 00m 45s) [production]
08:44 <moritzm> updating tor on radium to 0.3.1.9 [production]
08:41 <moritzm> updating tor packages to 0.3.1.9 [production]
08:30 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Increase traffic for db1096:3315 and pool db1096:3316 - T178359 (duration: 00m 45s) [production]
08:12 <marostegui@tin> Synchronized wmf-config/db-codfw.php: Pool db1096:3315 - T178359 (duration: 00m 44s) [production]
08:11 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Pool db1096:3315 - T178359 (duration: 00m 45s) [production]
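The pool/traffic entries above amount to editing the host-weight arrays in wmf-config/db-eqiad.php (typically a 'host:port' => weight entry in the relevant sectionLoads block) and syncing the file out from the deployment host. A minimal sketch of the deploy step, assuming the usual scap workflow; the log message mirrors the entry above:
    scap sync-file wmf-config/db-eqiad.php 'Pool db1096:3315 - T178359'   # push the config change to the appservers; scap records the "Synchronized" line seen here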
07:53 <moritzm> installing curl security updates [production]
07:17 <marostegui> Compress s1 on db1099 - T178359 [production]
07:08 <marostegui> Stop MySQL on db1044 as it will be decommissioned - T181696 [production]
07:05 <_joe_> playing with puppetdb status for ores2003 (deactivating/reactivating node) [production]
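Deactivating and reactivating a node in PuppetDB is normally done with the puppet CLI; the exact commands used here aren't recorded, but a rough sketch looks like:
    puppet node deactivate ores2003.codfw.wmnet   # mark the node's facts/catalogs as deactivated in PuppetDB
    # a later agent run on the host (e.g. puppet agent --test) re-registers it automatically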
06:40 <marostegui> Stop MySQL on db1098 to clone db1096.s6 - T178359 [production]
06:39 <marostegui@tin> Synchronized wmf-config/db-codfw.php: Remove db1044 from config as it will be decommissioned - T181696 (duration: 00m 45s) [production]
06:38 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Remove db1044 from config as it will be decommissioned - T181696 (duration: 00m 45s) [production]
06:34 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Depool db1098 - T178359 (duration: 00m 46s) [production]
06:21 <marostegui> Deploy alter table on s3 master (db1075) without replication - T174569 [production]
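"Without replication" here means keeping the schema change out of the binary log so replicas don't execute it and it can be applied host by host. A sketch with a purely hypothetical wiki, table and index name (the actual ALTER for T174569 is not shown in this log):
    mysql somewiki -e 'SET SESSION sql_log_bin = 0; ALTER TABLE some_table ADD INDEX some_index (some_column);'   # the ALTER runs locally only and is not written to the binlog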
02:32 <l10nupdate@tin> scap sync-l10n completed (1.31.0-wmf.10) (duration: 06m 28s) [production]
2017-12-03 §
15:33 <ejegg> disabled CiviCRM bounce processing job [production]
12:17 <akosiaris> empty ganeti1006, it had issues this morning per T181121 [production]
12:06 <marostegui> Fix dbstore1002 replication [production]
07:44 <akosiaris> ran puppet on conf2002, etcdmirror-conftool-eqiad-wmnet got started again [production]
05:11 <andrewbogott> deleting files on labsdb1003 /srv/tmp older than 30 days [production]
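The cleanup above is roughly a find invocation of this shape (the exact command isn't logged, so treat this as a sketch):
    find /srv/tmp -type f -mtime +30 -delete   # remove regular files last modified more than 30 days ago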
03:57 <no_justification> gerrit2001: icinga is flapping on the gerrit process/systemd check, which is a somewhat known issue (not sure why it started doing this all of a sudden). It's not letting me acknowledge it, but it's fine/harmless. Cf T176532 [production]
2017-12-02 §
17:55 <marostegui> Reboot db1096.s5 to pick up the correct innodb_buffer_pool size after finishing compressing s5 - T178359 [production]
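On the MariaDB versions in use at the time the InnoDB buffer pool generally could not be resized online, so a configured size change only takes effect after a restart; a quick post-reboot sanity check might look like:
    mysql -e 'SELECT @@innodb_buffer_pool_size / 1024 / 1024 / 1024 AS buffer_pool_gib;'   # confirm the instance picked up the new size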
03:51 <hoo> Ran "scap pull" on snapshot1001, after final T181385 tests [production]
00:03 <mutante> tried one more time on db2028 and db2029, both trusty. On db2028, gmond was running as user ganglia-monitor and puppet failed; had to manually kill the process, then the next puppet run was ok. On db2029, gmond was running as "499" but puppet just ran and removed it without manual intervention. (T177225) [production]
2017-12-01 §
23:15 <urandom> starting cassandra bootstrap, restbase1012-b - T179422 [production]
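Bootstrapping a new Cassandra instance is driven by simply starting the service, and progress can be watched with nodetool. A rough sketch, with cassandra-b as an assumed unit name for the second instance on a multi-instance host:
    systemctl start cassandra-b   # starting the new instance kicks off the bootstrap / data streaming
    nodetool status               # the joining instance shows as UJ until streaming completes
    nodetool netstats             # per-stream progress while it bootstraps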
21:49 <mutante> db2029 - removing ganglia-monitor, testing to kill gmond, running puppet to figure out how to cleanly remove it on trusty [production]
21:12 <mutante> db2023: manually killed the gmond (ganglia-monitor) process, which was still running even though the ganglia-monitor package had been removed and was causing puppet breakage (seemingly only on trusty). After that the puppet run is clean again and ganglia is removed. (T177225) (https://gerrit.wikimedia.org/r/#/c/394647/1) [production]
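Where the leftover gmond had to be cleaned up by hand, the procedure these entries describe boils down to something like the following (commands illustrative, not the exact ones run):
    pgrep -a gmond        # check whether a stray gmond is still running, and as which user
    pkill gmond           # stop the leftover daemon the package removal left behind
    puppet agent --test   # confirm the next puppet run now completes cleanly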
20:18 <awight@tin> Started deploy [ores/deploy@9afbf14]: (non-production) Test ORES deployment to ores100* [production]
20:17 <awight@tin> Finished deploy [ores/deploy@9afbf14]: (non-production) Test ORES deployment to ores1001 (duration: 02m 31s) [production]
20:15 <awight@tin> Started deploy [ores/deploy@9afbf14]: (non-production) Test ORES deployment to ores1001 [production]
20:03 <aaron@tin> Synchronized php-1.31.0-wmf.10/includes/libs/objectcache/WANObjectCache.php: f096d0b465b75d - temp logging for statsd spam (duration: 00m 45s) [production]
18:59 <demon@tin> Synchronized wmf-config/CommonSettings-labs.php: no-op (duration: 00m 46s) [production]
18:22 <mutante> Phabricator: restarting Apache for php-curl update [production]
18:21 <_joe_> restarting apache2 on the codfw puppetmasters [production]
18:06 <marktraceur@tin> Synchronized php-1.31.0-wmf.10/extensions/UploadWizard/resources/controller/uw.controller.Deed.js: (no justification provided) (duration: 00m 46s) [production]
17:49 <mutante> phab2001 - restarted apache [production]
17:33 <herron> stopped ircecho on einsteinium [production]
17:00 <awight@tin> Unlocked for deployment [ores/deploy]: Don't deploy while we're messing with git-lfs (duration: 00m 14s) [production]
17:00 <awight@tin> Locking from deployment [ores/deploy]: Don't deploy while we're messing with git-lfs (planned duration: 16666666666m 39s) [production]
17:00 <awight@tin> Locking from deployment [ores/deploy]: Don't deploy while we're messing with git-lfs (planned duration: -1m 59s) [production]
16:59 <awight@tin> Unlocked for deployment [ores/deploy]: Don't deploy while we're messing with git-lfs (duration: 00m 07s) [production]
16:59 <awight@tin> Locking from deployment [ores/deploy]: Don't deploy while we're messing with git-lfs (planned duration: 60m 00s) [production]
16:34 <jynus> stopping db2092 to clone s1 to db2085 [production]
16:24 <urandom> starting cassandra bootstrap, restbase1012-a -- T179422 [production]
15:27 <godog> bounce uwsgi on labmon1001 - stuck [production]