2016-02-03
11:11 <moritzm> repooling restbase1001 [production]
11:08 <hashar> restarting beta cluster puppetmaster just in case [releng]
11:07 <hashar> beta: apt-get upgrade on deployment-cache* hosts and checking puppet [releng]
11:04 <akosiaris> OTRS database upgraded to 3.3, moving on with 4.0 [production]
11:00 <jynus@mira> Synchronized wmf-config/db-eqiad.php: Repool db1063 at 100% load; depool db1067 for maintenance (duration: 01m 16s) [production]
10:59 <hashar> integration/beta: deleting /etc/apt/apt.conf.d/*proxy files. There is no need for them; in fact the web proxy is not reachable from labs [releng]
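The cleanup above amounts to dropping the APT proxy fragments and letting APT reach the mirrors directly. A minimal sketch, assuming the fragments match /etc/apt/apt.conf.d/*proxy as in the entry; the follow-up apt-get update is an assumption, not a command recorded in the log:

    # Remove stray APT proxy configuration fragments on a labs host (sketch only).
    sudo rm -v /etc/apt/apt.conf.d/*proxy
    # Confirm APT now reaches the mirrors without the unreachable proxy.
    sudo apt-get update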
10:53 <hashar> integration: switched puppet repo back to 'production' branch, rebased. [releng]
10:49 <hashar> various beta cluster hosts have puppet errors ... [releng]
10:48 <moritzm> depooling restbase1001 for kernel/Java update [production]
10:46 <hashar> integration-slave-trusty-1013 is about to run out of disk space on /mnt ... [releng]
10:42 <hashar> integration-slave-trusty-1016 out of disk space on /mnt ... [releng]
10:37 <_joe_> ending the load test on the eqiad apaches [production]
10:11 <moritzm> reboot francium for kernel update [production]
09:53 <jynus> m2 backup finished on /srv/backups/2016-02-03_08-51-06, filename 'db1020-bin.000842', position 220103947 [production]
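The binlog file and position recorded above are the replication coordinates of the backup. A sketch of how such coordinates could be used to attach a replica restored from it; the host, user and password below are placeholders and assumptions, not values from the log:

    # Point a freshly restored replica at the m2 master using the recorded
    # coordinates (illustrative only; credentials and hostname are assumptions).
    mysql -e "CHANGE MASTER TO
      MASTER_HOST='db1020.eqiad.wmnet',
      MASTER_USER='repl',
      MASTER_PASSWORD='********',
      MASTER_LOG_FILE='db1020-bin.000842',
      MASTER_LOG_POS=220103947;
      START SLAVE;"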
09:50 <moritzm> restarting neodymium for kernel update [production]
09:49 <_joe_> doing some basic load test on appservers in eqiad [production]
08:52 <akosiaris> stop otrs-daemon on mendelevium [production]
08:51 <jynus> starting mysql backup on db1020 (/srv/backups) [production]
08:44 <akosiaris> stop slave on db2011, db1020's (m2-master) slave, for OTRS migration. DO NOT ENABLE [production]
08:40 <akosiaris> stop exim4, cron, apache2 on iodine, mendelevium [production]
08:39 <akosiaris> disabling puppet on iodine, mendelevium, OTRS migration [production]
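The two entries above freeze the OTRS hosts ahead of the migration: puppet is disabled so it cannot restart anything, then mail, cron and the web frontend are stopped. A minimal sketch of that sequence, assuming ssh access and plain init services; not the literal commands that were run:

    # Freeze iodine and mendelevium ahead of the OTRS migration (sketch only).
    for host in iodine mendelevium; do
      ssh "$host" 'sudo puppet agent --disable "OTRS migration" &&
                   sudo service exim4 stop &&
                   sudo service cron stop &&
                   sudo service apache2 stop'
    done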
08:24 <jynus@mira> Synchronized wmf-config/db-eqiad.php: Repool db1063 with low weight (duration: 01m 20s) [production]
03:45 <bd808> Puppet failing on deployment-fluorine with "Error: Could not set uid on user[datasets]: Execution of '/usr/sbin/usermod -u 10003 datasets' returned 4: usermod: UID '10003' already exists" [releng]
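The usermod failure above means some other account already owns UID 10003, so puppet cannot reassign it to datasets. A quick way to find the conflicting entry (a sketch; the follow-up usermod is an assumption, not something recorded in the log):

    # Show which account currently holds UID 10003 on deployment-fluorine.
    getent passwd | awk -F: '$3 == 10003 {print $1}'
    # If that entry is stale, it could be moved to a free UID before re-running
    # puppet, e.g.: sudo usermod -u <free-uid> <conflicting-user>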
03:44 <bd808> Freed 28G by deleting deployment-fluorine:/srv/mw-log/archive/*2015* [releng]
03:41 <bd808> Ran deployment-bastion.deployment-prep:/home/bd808/cleanup-var-crap.sh and freed 565M [releng]
03:00 <YuviPanda> upgraded flannel on all hosts running it [tools]
2016-02-02
23:13 <demon@mira> Finished scap: everything re-sync one more time for good measure (duration: 17m 04s) [production]
22:56 <demon@mira> Started scap: everything re-sync one more time for good measure [production]
22:50 <bblack> repooling scap proxies: mw10033, mw1070, mw1097, mw1216 [production]
22:45 <chasemp> restart hhvm & apache2 on mw1235.eqiad.wmnet [production]
22:44 <_joe_> restarted hhvm on mw1231, stat_cache again [production]
22:42 <demon@mira> Finished scap: resync final batch with master (duration: 06m 48s) [production]
22:35 <demon@mira> Started scap: resync final batch with master [production]
22:31 <demon@mira> Finished scap: re-sync batch of mw1136-50, mw1190-1220, mw2150-mw2200 with master (duration: 09m 33s) [production]
22:22 <demon@mira> Started scap: re-sync batch of mw1136-50, mw1190-1220, mw2150-mw2200 with master [production]
22:20 <ori> restarted HHVM on mw1243. Lock-up. Backtrace in /tmp/hhvm.2897.bt [production]
22:20 <demon@mira> Finished scap: re-sync batch of mw1101-1135,1240-1260, 2101-2150 with master (duration: 12m 51s) [production]
22:07 <demon@mira> Started scap: re-sync batch of mw1101-1135,1240-1260, 2101-2150 with master [production]
22:00 <demon@mira> Finished scap: re-sync batch of mw1151-mw1225, mw2174-mw2214 with master (duration: 11m 24s) [production]
21:48 <demon@mira> Started scap: re-sync batch of mw1151-mw1225, mw2174-mw2214 with master [production]
21:45 <demon@mira> Finished scap: re-sync batch of mw1051-1100, mw2051-2100 with master (duration: 13m 41s) [production]
21:31 <demon@mira> Started scap: re-sync batch of mw1051-1100, mw2051-2100 with master [production]
21:28 <demon@mira> Finished scap: re-sync batch of mw1025-1050 and mw2007-mw2050 with master (2nd try) (duration: 14m 33s) [production]
21:27 <_joe_> depooling eqiad scap-proxies [production]
21:13 <demon@mira> Started scap: re-sync batch of mw1025-1050 and mw2007-mw2050 with master (2nd try) [production]
21:04 <demon@mira> scap aborted: re-sync batch of mw1025-1050 and mw2007-mw2050 with master (duration: 10m 11s) [production]
20:54 <demon@mira> Started scap: re-sync batch of mw1025-1050 and mw2007-mw2050 with master [production]
20:32 <hashar> Finished syncing mw1114-mw1119 (canary api appservers) [production]
20:28 <ori> restarted hhvm on mw1116 [production]
20:17 <hashar> Running sync-common on mw1114-mw1119 (canary api appservers) [production]