2016-02-13
01:18 <mutante> omg testing this log feature that logs straight to tickets (T108720) [production]
00:42 <krinkle@mira> Synchronized wmf-config/CommonSettings-labs.php: (no message) (duration: 01m 14s) [production]
00:28 <krinkle@mira> Synchronized wmf-config/CommonSettings-labs.php: (no message) (duration: 01m 17s) [production]
00:10 <mutante> ruthenium - restarting parsoid, now works out of /srv/ [production]
2016-02-12
23:54 <hashar> beta cluster broken since 20:30 UTC https://logstash-beta.wmflabs.org/#/dashboard/elasticsearch/fatalmonitor haven't looked [releng]
23:37 <mutante> ruthenium - moving parsoid path, cleaning up old resources [production]
22:33 <ori@mira> Synchronized php-1.27.0-wmf.13/extensions/MobileFrontend/extension.json: I315628aef3: Don't use 'qlow' for NetSpeed=B (duration: 01m 16s) [production]
21:30 <tgr@mira> Synchronized wmf-config/InitialiseSettings.php: T125455: log session-ip channel to logstash (duration: 01m 17s) [production]
21:22 <ori@mira> Synchronized docroot and w: Ifc5b02cba4: Speed trials: add no-srcset variant (duration: 01m 16s) [production]
19:56 <chasemp> nfs traffic shaping pilot round 2 [tools]
19:26 <bd808@mira> Synchronized php-1.27.0-wmf.13/extensions/Disambiguator/Disambiguator.hooks.php: Check for array index existence (7b5f87f) (T126651) (duration: 01m 15s) [production]
19:16 <bd808> Wikis back up thankfully [production]
19:15 <bd808@mira> Synchronized php-1.27.0-wmf.13/includes/session/SessionManager.php: Log multiple IPs using the same session or the same user account (4d8b8ca) (T125455) (duration: 01m 17s) [production]
19:15 <bd808> Synced files for T125455 in the wrong order; broke all wikis [production]
19:14 <bd808@mira> Synchronized php-1.27.0-wmf.13/includes/Setup.php: Log multiple IPs using the same session or the same user account (4d8b8ca) (duration: 01m 18s) [production]
19:12 <bd808@mira> Synchronized php-1.27.0-wmf.13/includes/DefaultSettings.php: Log multiple IPs using the same session or the same user account (4d8b8ca) (duration: 01m 16s) [production]
18:34 <krenair@mira> Synchronized docroot/noc: https://gerrit.wikimedia.org/r/#/c/270143/ (duration: 01m 15s) [production]
18:09 <krenair@mira> Synchronized php-1.27.0-wmf.13/extensions/WikimediaMaintenance/dumpInterwiki.php: https://gerrit.wikimedia.org/r/#/c/270328/ (duration: 01m 16s) [production]
17:55 <krenair@mira> Synchronized wmf-config/interwiki.php: https://gerrit.wikimedia.org/r/#/c/270327/ (duration: 01m 18s) [production]
17:41 <bd808> Upgraded to c599c8f to use requests[security] for SNI support (T126714) [tools.stashbot]
17:39 <mutante> wikibugs broken in operations and other channels [production]
17:36 <hashar> salt -v '*slave-trusty*' cmd.run 'apt-get -y install texlive-generic-extra' # T126422 [releng]
17:32 <hashar> adding texlive-generic-extra on CI slaves by cherry picking https://gerrit.wikimedia.org/r/#/c/270322/ - T126422 [releng]
17:19 <hashar> get rid of integration-dev; it is broken somehow [releng]
17:13 <_joe_> reloading apache on all the appservers [production]
17:10 <hashar> Nodepool back at spawning instances. contintcloud has been migrated in wmflabs [releng]
17:06 <hashar> CI is processing jobs again. Nodepool instances are spawning [production]
17:03 <_joe_> soft-reloading apache on half of appservers [production]
16:51 <thcipriani> running sudo salt '*' -b '10%' deploy.fixurl to fix deployment-prep trebuchet urls [releng]
16:39 <jynus> purging rows from analytics-slave as requested (eventlogging database) [production]
16:33 <gehel> starting to ship logs from elasticsearch to logstash (https://gerrit.wikimedia.org/r/#/c/269100/) [deployment-prep]
16:31 <hashar> bd808 added support for saltbot to update tasks automagically!!!! T108720 [releng]
16:15 <hashar> the pool of CI slaves is exhausted, no more jobs running (scheduled labs maintenance) [releng]
16:15 <hashar> the pool of CI slaves is exhausted, no more jobs running (scheduled labs maintenance) [production]
15:53 <andrewbogott> disabling puppet on labcontrol1001 [production]
15:31 <urandom> restoring compactor thread count to 10 on restbase1002.eqiad [production]
15:06 <Jeff_Green> flip barium rdns and mta hostname from barium.wm.o to civicrm.wm.o [production]
14:41 <jynus@mira> Synchronized wmf-config/db-eqiad.php: Reducing db1072 weight a bit (duration: 01m 16s) [production]
14:31 <bblack> upgrading openssl on cp1068 [production]
14:05 <cmjohnson1> replaced failed disk db1021 [production]
13:56 <cmjohnson1> ms-be1008 replacing /dev/sdd slot 3 T126627 [production]
13:01 <godog> restart thumbs swiftrepl, auth token expired T125791 [production]
12:21 <jynus> ongoing conversion on db1024, expect some lag (depooled, downtimed) [production]
11:48 <godog> reboot ms-be2003 to pick up new disk in the right order T125200 [production]
11:36 <moritzm> uploaded openssl 1.0.2f-1~wmf3 for jessie-wikimedia to carbon [production]
11:28 <moritzm> updated cp1008 to openssl 1.0.2f-1~wmf3 [production]
11:25 <jynus> deploying required filtering before adding adywiki to labs [production]
09:39 <godog> restbase1001 nodetool cleanup && nodetool stop compaction [production]
09:36 <godog> restbase1002 cleanup nodetool cleanup local_group_wikipedia_T_parsoid_dataW4ULtxs1oMqJ && stop compactions [production]
07:55 <moritzm> repooling elastic1023, hw problem has been fixed [production]