2018-04-11
08:18 <jynus> rerunning eqiad misc backups [production]
08:03 <marostegui@tin> Synchronized wmf-config/db-codfw.php: Repool db2069 as candidate master for x1 - T191275 (duration: 01m 03s) [production]
07:45 <ema> cp2022: restart varnish-be due to child process crash https://phabricator.wikimedia.org/P6979 T191229 [production]
07:27 <marostegui> Stop MySQL on db2033 to copy its data away before reimaging - T191275 [production]
07:08 <vgutierrez> Reimaging lvs5003.eqsin as stretch (2nd attempt) [production]
06:49 <elukey> restart Yarn Resource Manager daemons on analytics100[12] to pick up the new Prometheus configuration file [production]
06:20 <marostegui> Stop MySQL on db2033 to clone db2069 - T191275 [production]
06:17 <marostegui@tin> Synchronized wmf-config/db-eqiad.php: Add db2069 to the config as depooled x1 slave - T191275 (duration: 01m 03s) [production]
06:15 <marostegui@tin> Synchronized wmf-config/db-codfw.php: Add db2069 to the config as depooled x1 slave - T191275 (duration: 01m 01s) [production]
05:28 <Krinkle> manual coal back-fill still running with the normal coal disabled via systemd. Will restore normal coal when I wake up. [production]
05:22 <marostegui> Deploy schema change on codfw s8 master (db2045) with replication enabled (this will generate lag on codfw) - T187089 T185128 T153182 [production]
05:17 <marostegui> Reload haproxy on dbproxy1010 to repool labsdb1010 [production]
02:36 <l10nupdate@tin> scap sync-l10n completed (1.31.0-wmf.28) (duration: 05m 41s) [production]
00:12 <bstorm_> Updated views and indexes on labsdb1011 [production]
2018-04-10
23:32 <XioNoX> depooled eqsin due to router issue [production]
23:04 <Krinkle> Seemingly from 22:53 - 23:03 global traffic dropped by 30-60%, presumably due to issues in eqiad, where throughput dropped from 10 Gbit/s to 3 Gbit/s, sharper than ever before. [production]
22:49 <joal@tin> Finished deploy [analytics/refinery@33448cd]: Deploying fixes after todays deploy errors (duration: 04m 46s) [production]
22:45 <joal@tin> Started deploy [analytics/refinery@33448cd]: Deploying fixes after todays deploy errors [production]
21:18 <sbisson@tin> Finished deploy [kartotherian/deploy@8f3a903]: Rollback kartotherian to v0.0.35 (duration: 06m 27s) [production]
21:12 <sbisson@tin> Started deploy [kartotherian/deploy@8f3a903]: Rollback kartotherian to v0.0.35 [production]
20:41 <sbisson@tin> Finished deploy [kartotherian/deploy@bdf70ed]: Deploying kartotherian pre-i18n everywhere (downgrade snapshot) (duration: 03m 45s) [production]
20:37 <sbisson@tin> Started deploy [kartotherian/deploy@bdf70ed]: Deploying kartotherian pre-i18n everywhere (downgrade snapshot) [production]
20:30 <mutante> deploy1001 - reinstalled with stretch - re-adding to puppet (T175288) [production]
20:30 <mutante> deploy1001 - reinstalled with jessie - re-adding to puppet (T175288) [production]
20:13 <urandom> increasing change-prop sample rate to 20% (from 10%) in dev environment -- T186751 [production]
20:06 <thcipriani@tin> rebuilt and synchronized wikiversions files: testwiki back to 1.31.0-wmf.28 [production]
20:02 <sbisson@tin> Finished deploy [kartotherian/deploy@6e4d666]: Deploying kartotherian pre-i18n everywhere (duration: 04m 34s) [production]
19:58 <sbisson@tin> Started deploy [kartotherian/deploy@6e4d666]: Deploying kartotherian pre-i18n everywhere [production]
19:57 <sbisson@tin> Finished deploy [tilerator/deploy@3326c14]: Deploying tilerator pre-i18n everywhere (duration: 00m 48s) [production]
19:56 <sbisson@tin> Started deploy [tilerator/deploy@3326c14]: Deploying tilerator pre-i18n everywhere [production]
19:48 <sbisson@tin> Finished deploy [tilerator/deploy@3326c14]: Deploying tilerator pre-i18n to maps-test* (duration: 00m 27s) [production]
19:48 <sbisson@tin> Started deploy [tilerator/deploy@3326c14]: Deploying tilerator pre-i18n to maps-test* [production]
19:16 <thcipriani@tin> Finished scap: testwiki to php-1.31.0-wmf.29 and rebuild l10n cache (duration: 66m 28s) [production]
18:10 <thcipriani@tin> Started scap: testwiki to php-1.31.0-wmf.29 and rebuild l10n cache [production]
18:07 <Krinkle> Stopping coal on graphite1001 to manually repopulate for T191239 [production]
18:04 <otto@tin> Finished deploy [analytics/refinery@b8ea97f]: refinery 0.0.60 - take 3 (duration: 04m 54s) [production]
17:59 <otto@tin> Started deploy [analytics/refinery@b8ea97f]: refinery 0.0.60 - take 3 [production]
17:58 <otto@tin> Finished deploy [analytics/refinery@b8ea97f]: refinery 0.0.60 - take 2 (duration: 01m 50s) [production]
17:56 <otto@tin> Started deploy [analytics/refinery@b8ea97f]: refinery 0.0.60 - take 2 [production]
17:49 <joal@tin> Finished deploy [analytics/refinery@b8ea97f]: Analytics weekly deploy - Move to spark 2 (duration: 03m 55s) [production]
17:48 <joal@tin> (no justification provided) [production]
17:47 <joal@tin> (no justification provided) [production]
17:45 <joal@tin> Started deploy [analytics/refinery@b8ea97f]: Analytics weekly deploy - Move to spark 2 [production]
17:43 <chasemp> add static route to neutron poc instance range for codfw 172.16.128.0/21 [production]
17:22 <papaul> shutting down cp2022 for main board replacement [production]
17:20 <awight@tin> Finished deploy [ores/deploy@d35a1e6]: Test deploy virtualenv on ores1001, with logging and forced failure (duration: 02m 44s) [production]
17:17 <awight@tin> Started deploy [ores/deploy@d35a1e6]: Test deploy virtualenv on ores1001, with logging and forced failure [production]
17:07 <awight@tin> Finished deploy [ores/deploy@1e18fa6]: Test deploy virtualenv on ores1001, with logging (duration: 02m 28s) [production]
17:05 <awight@tin> Started deploy [ores/deploy@1e18fa6]: Test deploy virtualenv on ores1001, with logging [production]