2013-02-14
22:29 <mflaschen> Started syncing Wikimedia installation... : [production]
22:21 <sbernardin> taking storage1 offline for decommissioning per: https://rt.wikimedia.org/Ticket/Display.html?id=4529 [production]
22:18 <RobH> pushing mw1161-1188 into apache service (sans 1161/1164) from false to true in pybal, otherwise all online [production]
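For context: PyBal, the load balancer, pools backends from host lists made up of one Python dictionary literal per line, and the false/true flip above refers to the 'enabled' key in those entries. A minimal sketch of what such lines look like, with weights chosen purely for illustration:

    # Illustrative pybal host-list entries, one Python dict literal per line.
    # Weights here are examples, not the actual 2013 values.
    {'host': 'mw1161.eqiad.wmnet', 'weight': 10, 'enabled': False}  # kept depooled
    {'host': 'mw1165.eqiad.wmnet', 'weight': 10, 'enabled': True}   # pooled, serving traffic

PyBal re-reads these lists periodically, so flipping 'enabled' to True is what brings a host into rotation once its health checks pass.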
22:16 <olivneh> synchronized wmf-config/CommonSettings.php 'Updating for test2wiki' [production]
22:14 <olivneh> synchronized php-1.21wmf9/extensions/Math [production]
22:03 <Ryan_Lane> replacing gluster upstarts on all labstore nodes with a regular init script [production]
22:01 <reedy> synchronized docroot [production]
22:00 <olivneh> synchronized wmf-config [production]
21:49 <olivneh> synchronized wmf-config/InitialiseSettings.php 'Enabling PostEdit on {be|mediawiki|outreach}wikis' [production]
21:45 <olivneh> synchronized php-1.21wmf9/extensions/EventLogging [production]
21:39 <sbernardin> taking down srv266 and srv268 for decommissioning ... https://rt.wikimedia.org/Ticket/Display.html?id=4534 [production]
21:31 <ottomata1> restarted solr jetty on solr2 [production]
21:30 <bsitu> Finished syncing Wikimedia installation... : Update ArticleFeedbackv5, Echo, PageTriage to master [production]
21:25 <ottomata> restarted solr jetty on solr3 and solr1003 for MaxSem [production]
21:03 <bsitu> Started syncing Wikimedia installation... : Update ArticleFeedbackv5, Echo, PageTriage to master [production]
20:24 <kaldari> synchronized wmf-config/CommonSettings.php 'updating CommonSettings.php for Echo' [production]
20:15 <apergos> rebooting mw1010 after some issues with OOM [production]
19:22 <RobH> new servers are NOT in pybal yet, and thus not getting traffic [production]
19:22 <RobH> just added mw1161-1200 into node lists for eventual service, some may throw sync script errors as I track them down [production]
19:15 <Ryan_Lane> rebooting labstore3 [production]
19:08 <Ryan_Lane> rebooting labstore4 [production]
19:01 <Ryan_Lane> rebooting labstore2 [production]
18:52 <Ryan_Lane> rebooting labstore1 [production]
15:22 <demon> synchronized wmf-config/InitialiseSettings.php 'Updating wikibase sort order' [production]
15:22 <demon> synchronized php-1.21wmf9/extensions/Wikibase/client/includes/SortUtils.php 'Updating wikibase sort order' [production]
14:52 <mutante> re-enabled cp1044 in pybal - mobile varnish cache servers all on precise now [production]
14:39 <hashar> Jenkins: adding several extensions to Zuul configuration {{gerrit|49041}} [production]
14:30 <hashar> Jenkins: updated all jobs using the latest jjb-config version [production]
13:35 <mark> Fixed fixing [production]
13:27 <mutante> disabling cp1044 in pybal, reinstalling with precise [production]
13:22 <mutante> re-enabling cp1043 in pybal [production]
13:11 <mark> Upgraded varnish to latest version with updated range patch on cp1021-1028 [production]
12:16 <mark> Inserted varnish 3.0.3plus~rc1-wm7 into the precise-wikimedia APT repository [production]
12:14 <mutante> stopping varnish backend on cp1043 and starting reinstall [production]
12:10 <mutante> disabling cp1043 in pybal [production]
11:31 <mutante> ms1004 - full disk, deleted large HTCPpurger.log file, fixed puppet runs [production]
02:29 <LocalisationUpdate> completed (1.21wmf9) at Thu Feb 14 02:29:05 UTC 2013 [production]
01:17 <andrewbogott> updated OpenStackManager to git head on virt0. Previous state is preserved (temporarily) in the local 'premerge' branch. [production]
01:13 <paravoid> removing mysql & apache from emery(!) [production]
00:03 <RobH> db29 OS installed, handed off to Peter [production]
2013-02-13
23:50 <RobH> db29 reinstalling [production]
23:48 <paravoid> reinstalling sq48 [production]
23:22 <RobH> db27 and db29 will be up and down for reinstallations [production]
23:17 <sbernardin> rebooting db29 per https://rt.wikimedia.org/Ticket/Display.html?id=4528 [production]
23:17 <sbernardin> rebooting db27 per https://rt.wikimedia.org/Ticket/Display.html?id=4528 [production]
23:11 <RobH> db1013 online and puppet updated, needs to be pushed into some db cluster service [production]
22:45 <RobH> mw27 locked up, powercycling [production]
22:36 <notpeter> powercycling all eqiad jobrunners [production]
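Power-cycling a batch of machines like this is normally done against their out-of-band management interfaces rather than the hosts themselves. The sketch below is a hypothetical illustration only: the host names, the '.mgmt' naming scheme, the credential file, and the use of ipmitool are assumptions, not the actual procedure recorded here.

    # Hypothetical sketch: power-cycle hosts via their IPMI management interfaces.
    # Hostnames, the management-network naming and credential handling are
    # assumptions for illustration.
    import subprocess

    jobrunners = ['mw1001.eqiad.wmnet', 'mw1002.eqiad.wmnet']  # example hosts

    for host in jobrunners:
        mgmt = host.replace('.eqiad.wmnet', '.mgmt.eqiad.wmnet')  # assumed naming
        subprocess.check_call([
            'ipmitool', '-I', 'lanplus', '-H', mgmt,
            '-U', 'root', '-f', '/root/.ipmi_pass',  # password read from this file
            'chassis', 'power', 'cycle',
        ])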
22:13 <RobH> adding mw1201-1208 to eqiad apache pool [production]
21:44 <Reedy> Running sync-common on mw1203 [production]