2019-02-11
10:01 <elukey> restart superset to pick up new config.py changes [analytics]
09:38 <marostegui> Stop all mysql instances on dbstore1005 for reboot [production]
09:11 <marostegui> Stop all mysql instances on dbstore1003 for reboot [production]
08:38 <elukey> restart superset to pick up new settings in config.py [analytics]
08:17 <moritzm> removed cloudcontrol2001-dev.codfw.wmnet from debmonitor (actual hostname in use is cloudcontrol2001-dev.wikimedia.org) [production]
08:07 <marostegui> Deploy schema change on s8 codfw master (db2045) - this will generate lag on codfw T210713 [production]
07:43 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Fully repool db1100 (duration: 00m 46s) [production]
07:39 <marostegui> Deploy schema change on s7 primary master (db1062) - T210713 [production]
07:27 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Give api traffic to db1100 (duration: 00m 46s) [production]
07:18 <marostegui> Stop all mysql instances on dbstore1004 for a reboot [production]
07:17 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1100 with low weight (duration: 00m 46s) [production]
07:06 <marostegui> Upgrade MySQL on db1100 [production]
07:06 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1100 for mysql upgrade (duration: 00m 47s) [production]
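The 07:06–07:43 entries above show the standard depool → upgrade → gradual repool cycle for a core database host. A minimal sketch of that cycle, assuming a deploy host with scap and a checkout of operations/mediawiki-config under /srv/mediawiki-staging (the weight changes to wmf-config/db-eqiad.php themselves are edited by hand before each sync):

```shell
# Sketch of the db1100 maintenance cycle logged above; run on the deploy host.
cd /srv/mediawiki-staging

# 1. Edit wmf-config/db-eqiad.php to remove db1100 from the pool, then sync:
scap sync-file wmf-config/db-eqiad.php 'Depool db1100 for mysql upgrade'

# 2. Upgrade MySQL on the now-depooled host (done on db1100 itself).

# 3. Repool gradually, syncing the config after each manual weight edit:
scap sync-file wmf-config/db-eqiad.php 'Repool db1100 with low weight'
scap sync-file wmf-config/db-eqiad.php 'Give api traffic to db1100'
scap sync-file wmf-config/db-eqiad.php 'Fully repool db1100'
```

The gradual repool (low weight, then API traffic, then full weight) lets the cold buffer pool warm up before the host takes its normal share of queries.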
07:00 <marostegui> Restart icinga on icinga1001 - checks went awol [production]
06:51 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Repool db1079 (duration: 00m 48s) [production]
06:14 <marostegui@deploy1001> Synchronized wmf-config/db-eqiad.php: Depool db1079 (duration: 00m 48s) [production]
06:14 <marostegui@deploy1001> sync-file aborted: Depool db0179 (duration: 00m 01s) [production]
04:23 <TimStarling> on mwmaint1002: running normalizeThrottleParameters.php --dry-run on all wikis (T209565) [production]
04:19 <tstarling@deploy1001> Synchronized php-1.33.0-wmf.16/extensions/AbuseFilter/maintenance/normalizeThrottleParameters.php: maintenance script update for new dry run (duration: 00m 47s) [production]
04:19 <tstarling@deploy1001> Synchronized php-1.33.0-wmf.16/extensions/WikimediaEvents/tests/phpunit/PageViewsTest.php: test-only undeployed change (duration: 00m 46s) [production]
04:18 <tstarling@deploy1001> Synchronized php-1.33.0-wmf.16/extensions/NavigationTiming/tests/ext.navigationTiming.test.js: test-only undeployed change (duration: 00m 51s) [production]
04:10 <tstarling@deploy1001> sync-file aborted: test-only undeployed change (duration: 00m 12s) [production]
03:06 <Reedy> graceful restart of zuul as no jobs were running [releng]
03:05 <kartik@deploy1001> Finished deploy [cxserver/deploy@ee4a15a]: Update cxserver to 8928852 (T213256) (duration: 04m 08s) [production]
03:00 <kartik@deploy1001> Started deploy [cxserver/deploy@ee4a15a]: Update cxserver to 8928852 (T213256) [production]
00:27 <bd808> Restarted webservice. Looks like someone found a slow IP to look up, handed that URL to a bunch of folks, and eventually locked up all the lighttpd threads with the same slow query [tools.whois]
2019-02-10
20:07 <bd808> Deploy 5f20413 Make labels for legacy Trusty grid explicit (T215712) [tools.admin]
19:59 <volans|off> force rebooting mw1299, stuck again - T215569 [production]
19:16 <volans|off> forcing reboot of icinga1001 because it's stuck again (no ping, no ssh, CPU stuck messages on console) - T214760 [production]
10:52 <elukey> re-run webrequest upload webrequest-load-wf-upload-2019-2-10-0 [analytics]
10:52 <elukey> killed oozie job related to webrequest-load-wf-upload-2019-2-10-0, seemed stuck in generate_sequence_statistics (not really clear why) [analytics]
09:25 <marostegui> Disable notifications for lag checks on dbstore1002 - T210478 [production]
03:22 <Krinkle> Updating docker-pkg files on contint1001 for https://gerrit.wikimedia.org/r/489434 (Create quibble-stretch-hhvm, replacing jessie) [releng]
02:06 <Krinkle> Updating docker-pkg files on contint1001 for https://gerrit.wikimedia.org/r/489430 [releng]
2019-02-09
21:42 <Reedy> running `foreachwiki refreshImageMetadata.php --mediatype BITMAP --mime image/vnd.djvu --force` on mwmaint1002 T215635 [production]
21:41 <Reedy> refreshImageMetadata.php for commonswiki done T215635 [production]
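`foreachwiki` wraps a MediaWiki maintenance script and runs it once per wiki in the farm's dblist; the single-wiki runs against commonswiki logged on 2019-02-08 use the `mwscript` wrapper with an explicit wiki instead. A sketch of both forms, assuming the standard Wikimedia maintenance-host wrappers:

```shell
# Single wiki: the --mediatype/--mime flags restrict the metadata refresh
# to DjVu bitmap files, and --force re-extracts even if metadata exists.
mwscript refreshImageMetadata.php --wiki=commonswiki \
    --mediatype BITMAP --mime image/vnd.djvu --force

# All wikis: same script, iterated over every wiki in the dblist.
foreachwiki refreshImageMetadata.php \
    --mediatype BITMAP --mime image/vnd.djvu --force
```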
19:16 <bd808> Updated to 2494e6a Add link to Toolforge admin console [tools.admin]
16:51 <Jeff_Green> restarted icinga process on icinga1001 because of passive check alert-storm [production]
13:53 <hauskatze> Revert last cronjob migrations back to trusty due to some Py2 issues; webservice remains on stretch with k8s backend [tools.stewardbots]
12:55 <hauskatze> Migrate cronjobs from trusty to stretch [tools.stewardbots]
12:54 <hauskatze> Migrate webservice to stretch and use kubernetes [tools.stewardbots]
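The stewardbots migration above (and its partial revert at 13:53) follows the usual Toolforge pattern: the webservice is stopped on the old grid and restarted on the new backend, while cron jobs move separately with the submit host. A rough sketch, run as the tool user; the webservice type (`php5.6` here) and submit hostnames are assumptions for illustration, not taken from the log:

```shell
# Restart the tool's webservice on the Kubernetes backend
# (type name depends on the tool's runtime).
webservice stop
webservice --backend=kubernetes php5.6 start

# Cron jobs live on the grid submit host, so migrating them means
# removing the crontab on the old (Trusty) bastion and installing it
# on the new (Stretch) one -- hypothetical hostnames:
ssh login-trusty.tools.wmflabs.org 'crontab -r'
ssh login.tools.wmflabs.org 'crontab mycrontab'
```

As the 13:53 revert shows, the two halves can be rolled back independently: the cron jobs went back to Trusty over Python 2 issues while the webservice stayed on Kubernetes.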
03:41 <Jayprakash12345> Migrating from Ubuntu Trusty job grid to Debian Stretch [tools.indic-wsstats]
2019-02-08
23:23 <Reedy> running `refreshImageMetadata.php --mediatype BITMAP --mime image/vnd.djvu --force` against commonswiki on mwmaint1002 T215635 (this time we mean it) [production]
22:56 <Reedy> running `refreshImageMetadata.php --mediatype BITMAP --mime image/vnd.djvu` against commonswiki on mwmaint1002 T215635 [production]
22:52 <mutante> - creating throw away instance to test if T128642 is still true or not [wikistats]
22:50 <mutante> - shutting down and deleted instance T210008 once made for testing krypton upgrade but broken somehow [wikistats]
21:40 <bd808> Restarted webservice [tools.bash]
21:28 <MacFan4000> moving Cron from trusty to stretch [tools.zppixbot]
21:25 <reedy@deploy1001> Synchronized multiversion/MWMultiVersion.php: Move variable (duration: 00m 49s) [production]
21:22 <andrewbogott> deleting VMS and project as per T185430 [wikidataconcepts]