2016-07-25
01:38 <jynus> m2 replication on db2011 stopped, master binlog pos: db1020-bin.000968:1013334195 [production]
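A minimal sketch of how the slave thread is typically stopped and the executed master coordinates read back (host taken from the entry above; the exact invocation and credentials are assumed, not logged):
    mysql -h db2011 -e 'STOP SLAVE;'
    mysql -h db2011 -e 'SHOW SLAVE STATUS\G' | grep -E 'Relay_Master_Log_File|Exec_Master_Log_Pos'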
01:37 <hashar> scandium: restarted zuul-merger [production]
01:36 <ostriches> ytterbium: Stopped puppet, stopped gerrit process. [production]
01:34 <mutante> switched gerrit-new to gerrit in DNS [production]
01:30 <ostriches> lead: stopped puppet for a few minutes [production]
01:17 <hashar> scandium: migrating zuul-merger repos to lead: find /srv/ssd/zuul/git -path '*/.git/config' -print -execdir sed -i -e 's/ytterbium.wikimedia.org/lead.wikimedia.org/' config \; [production]
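A quick follow-up check that no repository still points at the old host (path and hostname taken from the entry above; the grep itself is an assumed verification step, not part of the log):
    grep -rl 'ytterbium.wikimedia.org' /srv/ssd/zuul/git --include=config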
01:10 <hashar> stopping CI [production]
01:09 <jynus> reviewdb backup finished, available on db1020:/srv/tmp/2016-07-25_00-54-31/ [production]
01:02 <ostriches> rsyncing latest git data from ytterbium to lead [production]
00:57 <mutante> manually deleted reviewer-counts cron from gerrit2 user; it runs as root, and puppet does not remove crons unless ensure=>absent [production]
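A minimal sketch of that manual cleanup, assuming the entry lived in the gerrit2 user's crontab; Puppet's cron type only deletes entries that are declared with ensure=>absent, so simply dropping the resource from the manifest leaves the line behind:
    # run as root: filter out the stale entry and reinstall the rest
    crontab -u gerrit2 -l | grep -v reviewer-counts | crontab -u gerrit2 -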
00:55 <jynus> starting hot backup of db1020's reviewdb [production]
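The timestamped directory reported when this backup finished (01:09 entry above) matches the default output layout of Percona's innobackupex, so the invocation was probably of this shape (tool and options assumed, not logged):
    innobackupex --databases=reviewdb /srv/tmp/            # writes /srv/tmp/<timestamp>/
    innobackupex --apply-log /srv/tmp/2016-07-25_00-54-31/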
2016-07-24
02:25 <l10nupdate@tin> ResourceLoader cache refresh completed at Sun Jul 24 02:25:08 UTC 2016 (duration 4m 34s) [production]
02:20 <mwdeploy@tin> scap sync-l10n completed (1.28.0-wmf.11) (duration: 08m 59s) [production]
2016-07-23
15:38 <godog> stop swift in esams test cluster, lots of logging from there [production]
15:37 <godog> lithium: sudo lvextend --size +10G -r /dev/mapper/lithium--vg-syslog [production]
04:58 <ori> Gerrit is back up after a service restart; it was unavailable between ~04:29 and 04:57 UTC [production]
04:56 <ori> Restarting Gerrit on ytterbium [production]
04:48 <ori> Users report Gerrit is down; on ytterbium java is occupying two cores at 100% [production]
03:48 <chasemp> gnt-instance reboot seaborgium.wikimedia.org [production]
02:26 <l10nupdate@tin> ResourceLoader cache refresh completed at Sat Jul 23 02:26:49 UTC 2016 (duration 5m 41s) [production]
02:21 <mwdeploy@tin> scap sync-l10n completed (1.28.0-wmf.11) (duration: 08m 24s) [production]
01:02 <tgr@tin> Synchronized php-1.28.0-wmf.11/extensions/CentralAuth/includes/CentralAuthPlugin.php: T141160 (duration: 00m 29s) [production]
01:01 <tgr@tin> Synchronized php-1.28.0-wmf.11/extensions/CentralAuth/includes/CentralAuthHooks.php: T141160 (duration: 00m 27s) [production]
01:00 <tgr@tin> Synchronized php-1.28.0-wmf.11/extensions/CentralAuth/includes/CentralAuthPrimaryAuthenticationProvider.php: T141160 (duration: 00m 28s) [production]
00:37 <tgr> doing an emergency deploy of https://gerrit.wikimedia.org/r/#/c/300679 for T141160; the bug causes dozens of new users per hour to be left unattached on loginwiki, which probably has weird consequences [production]
2016-07-22
22:19 <aaron@tin> Synchronized wmf-config/InitialiseSettings.php: Enable debug logging for DBTransaction (duration: 00m 38s) [production]
21:10 <ejegg> updated civicrm from 2f4805fa2d2a7c57881408be2b3a017d26d8f43e to d657255e1edebeccfc0a03bea70b78eb11375cf8 [production]
20:58 <ejegg> disabled Worldpay audit parser job [production]
18:59 <ejegg> rolled back payments from 79d2b67067fd7e579372b63e0d619eccfa3b9143 to 79cb53998c41f72d0fa49130ed1f66dc112b478c [production]
18:54 <mutante> restart grrrit-wm [production]
16:05 <Jeff_Green> running authdns-update to correct a DKIM public key on wikipedia.org [production]
15:24 <anomie> Starting script to populate empty gu_auth_token [[phab:T140478]] [production]
15:16 <urandom> T140825: Restarting Cassandra to apply 8MB trickle_fsync (restbase1015-a.eqiad.wmnet) [production]
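trickle_fsync is a cassandra.yaml setting; with the 8MB figure above it would look roughly like this (option names from stock Cassandra, exact values assumed):
    trickle_fsync: true
    trickle_fsync_interval_in_kb: 8192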
14:21 <gehel> rolling restart of logstash100[1-3] - T141063 [production]
14:19 <urandom> T134016: Bootstrapping restbase2004-c.codfw.wmnet [production]
12:42 <jynus> applying new m5 db grants [production]
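The grant statements themselves are not logged here (the same applies to the m1/m2/m3 grant entries below); the change would be of this general shape, with database, user, host pattern and privilege list purely illustrative:
    mysql -e "GRANT SELECT, INSERT, UPDATE, DELETE ON exampledb.* TO 'exampleuser'@'10.64.%';"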
11:12 <jynus> reimage dbproxy1009 T140983 [production]
11:04 <jynus> applying new m2 db grants [production]
10:47 <jynus> reimage dbproxy1007 T140983 [production]
10:36 <jynus> applying new m1 db grants [production]
10:27 <hashar> Restarting Jenkins entirely (deadlocked) [production]
10:23 <hashar> Jenkins has some random deadlock. Will probably reboot it [production]
09:45 <jynus> reimage dbproxy1006 [production]
09:36 <jynus> applying new m3 db grants [production]
08:19 <jynus> reimage dbproxy1008 [production]
06:43 <jynus> updating dns records: m3-slave to db1043; m2-master to dbproxy1002 [production]
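A sketch of the DNS side of this change, assuming both names are CNAMEs in the internal zone pushed out with authdns-update (record syntax and target FQDNs illustrative, not taken from the zone files):
    m3-slave   1H IN CNAME db1043.eqiad.wmnet.
    m2-master  1H IN CNAME dbproxy1002.eqiad.wmnet.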
04:08 <jynus> backing up, shutting down and reimaging db1043 [production]
03:14 <jynus> stopping db1043 db [production]
03:06 <twentyafterfour> restarted apache2 and phd on iridium [production]
03:04 <jynus> reverting m3-master dns back to the proxy [production]