2015-10-23 §
21:13 <marxarelli> deployment-db1 binlog deployment-db1-bin.000062 appears corrupt [releng]
20:54 <marxarelli> deployment-db2 shows slave io but slave sql failed on duplicate key [releng]
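The entries above don't record how (or whether) replication on deployment-db2 was repaired. A minimal triage for a SQL-thread stop on a duplicate key, assuming the offending row is safe to skip, would be:
<pre>
# Inspect why the SQL thread stopped (the IO thread was still running per the entry above)
mysql -e 'SHOW SLAVE STATUS\G' | grep -E 'Slave_(IO|SQL)_Running|Last_SQL_Error'

# If it is a single duplicate-key row that is safe to skip:
mysql -e 'STOP SLAVE; SET GLOBAL sql_slave_skip_counter = 1; START SLAVE;'
</pre>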
18:59 <twentyafterfour> deleted atop.log.* files on deployment-bastion. when are we going to enlarge /var on this instance? grr [releng]
18:58 <marxarelli> Killed mysql process 15840440 on account of its gargantuan temp file filling up /mnt [releng]
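Whether this was an OS-level kill or a KILL of a MariaDB connection id isn't clear from the entry; as a server-side sketch, freeing /mnt amounts to:
<pre>
mysql -e 'SHOW FULL PROCESSLIST;'   # spot the long-running thread writing the temp file
mysql -e 'KILL 15840440;'           # the id logged above; its temp file is released on exit
df -h /mnt                          # confirm the filesystem has headroom again
</pre>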
2015-10-22 §
10:36 <hashar> integration: cherry picked https://gerrit.wikimedia.org/r/#/c/244748/ "contint: install npm/grunt-cli with npm", giving it a try one host at a time [releng]
10:31 <hashar> integration disabling puppet <tt>salt --show-timeout --timeout=10 '*' cmd.run 'puppet agent --disable "install npm/grunt-cli via puppet https://gerrit.wikimedia.org/r/#/c/244748/"'</tt> [releng]
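Once the cherry-pick checks out, the reverse of the command above re-enables puppet fleet-wide (a sketch assuming the same salt targeting):
<pre>
salt --show-timeout --timeout=10 '*' cmd.run 'puppet agent --enable'
salt --show-timeout --timeout=10 '*' cmd.run 'puppet agent --test --noop'   # dry run to confirm the change applies cleanly
</pre>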
10:05 <hashar> salt-key -d deployment-logstash2.eqiad.wmflabs [releng]
10:05 <hashar> salt-key -d deployment-urldownloader.eqiad.wmflabs [releng]
10:04 <hashar> integration: clean up downloaded apt packages which are filling /var/cache/apt/archives on a few instances <tt>salt --show-timeout '*' cmd.run 'apt-get clean'</tt> [releng]
10:03 <hashar> beta: clean up downloaded apt packages which are filling /var/cache/apt/archives on a few instances (ex: 4GBytes on mediawiki02) <tt>salt --show-timeout '*' cmd.run 'apt-get clean'</tt> [releng]
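Before cleaning, the instances actually carrying a large cache can be spotted with a quick du across the fleet (a sketch reusing the same salt targeting as above):
<pre>
salt --show-timeout '*' cmd.run 'du -sh /var/cache/apt/archives'
salt --show-timeout '*' cmd.run 'apt-get clean'
</pre>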
10:01 <hashar> beta-cluster: I have deleted some incorrect salt minions with: salt-key -d i-0000* [releng]
10:00 <hashar> beta-cluster: I have deleted some incorrect salt minions with: salt-key -d i-0000* [releng]
09:46 <hashar> beta-cluster: I have deleted some incorrect salt minions with: salt-key -d i-0000* [releng]
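For the record, the cleanup above is salt-key's usual list/delete cycle (a sketch):
<pre>
salt-key -L              # list accepted and pending minion keys; stale i-0000* entries show up here
salt-key -d 'i-0000*'    # delete them (prompts for confirmation)
salt-key -y -d 'i-0000*' # or skip the prompt
</pre>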
2015-10-21 §
20:26 <legoktm> deploying https://gerrit.wikimedia.org/r/247916 [releng]
17:33 <jzerebecki> reloading zuul for 9362473..ec1313d [releng]
16:57 <thcipriani> deployed restbase to deployment-restbase0{1,2} with scap3 [releng]
2015-10-19 §
17:17 <jzerebecki> reloading zuul for d9cae0a..9362473 [releng]
16:46 <jzerebecki> reloading zuul for c6ca369..d9cae0a [releng]
14:51 <hashar> Adding CirrusSearch to the extensions gate ( https://gerrit.wikimedia.org/r/#/c/247280/ ) [releng]
14:38 <hashar> Adding PdfHandler to the extensions gate ( https://gerrit.wikimedia.org/r/#/c/247278/ ) [releng]
14:26 <hashar> Adding TimedMediaHandler to the extensions gate ( https://gerrit.wikimedia.org/r/#/c/247273/ ) [releng]
14:05 <hashar> Adding MwEmbedSupport to the extensions gate ( https://gerrit.wikimedia.org/r/#/c/247271/ ) [releng]
13:40 <hashar> Adding Cite to the extensions gate ( https://gerrit.wikimedia.org/r/#/c/247266/ ) [releng]
13:18 <hashar> Adding Elastica to the extensions gate ( https://gerrit.wikimedia.org/r/#/c/247264/ ) [releng]
2015-10-17 §
03:45 <bd808> Freed diskspace on deployment-bastion with `sudo /etc/cron.daily/acct; sudo rm /var/log/account/pacct.?*` [releng]
2015-10-16 §
20:51 <hashar> Restarting Jenkins to remove potential dead locks before the week-end [releng]
20:34 <hashar> cancelled a bunch of https://integration.wikimedia.org/ci/job/mediawiki-core-doxygen-publish/ jobs. We keep rebuilding merged REL* changes over and over [releng]
20:24 <hashar> disconnected / reconnected a bunch of trusty slaves. Seems some node executors were disabled/deadlocked [releng]
12:56 <hashar> Added Cards and RelatedArticles to the shared 'mediawiki-extensions-*' jobs https://gerrit.wikimedia.org/r/#/c/246818/ [releng]
2015-10-15 §
22:27 <legoktm> deploying https://gerrit.wikimedia.org/r/246785 [releng]
20:33 <SMalyshev> cherry-picked https://gerrit.wikimedia.org/r/#/c/240888/1 to deployment-puppetmaster.eqiad.wmflabs to test portal deployment [releng]
04:34 <bd808> freed another 258M on deployment-bastion by forcing an early rotation of /var/log/account/pacct and deleting the archived copy [releng]
04:19 <bd808> Freed 290M on deployment-bastion:/var by deleting old pacct files [releng]
2015-10-14 §
18:02 <legoktm> deploying https://gerrit.wikimedia.org/r/246276 [releng]
16:22 <Krinkle> Reloading Zuul to deploy https://gerrit.wikimedia.org/r/246256 [releng]
13:02 <hashar> Adjusting Jenkins SMTP server from polonium.wikimedia.org to mx1001.wikimedia.org [releng]
2015-10-13 §
03:21 <bd808> Updated scap to 13c2af4 (Fix sync-dblist to go with dblist moves to folder) [releng]
2015-10-12 §
08:54 <hashar> zuul-merger process leaked file descriptors and ended up unable to open any more files. Fixed by restarting the service on gallium. https://phabricator.wikimedia.org/T115243 [releng]
08:44 <hashar> Zuul CI in trouble. zuul-merger cannot apply patches anymore https://phabricator.wikimedia.org/T115243 [releng]
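The two entries above track https://phabricator.wikimedia.org/T115243. A rough way to confirm a descriptor leak and apply the same fix (the exact service name on gallium is an assumption):
<pre>
pid=$(pgrep -f zuul-merger | head -1)
sudo ls /proc/$pid/fd | wc -l          # descriptors currently held by zuul-merger
grep 'open files' /proc/$pid/limits    # the per-process limit it was hitting
sudo service zuul-merger restart       # restarting drops the leaked descriptors
</pre>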
02:43 <Krenair> fixed puppet on deployment-conf03 by running several manual apt-get commands [releng]
00:38 <bd808> fixed puppet on deployment-restbase01 by running several manual apt-get and dpkg commands; had to downgrade zsh from 5.1.1-1 (unstable) to 5.0.7-5 (stable) [releng]
00:27 <bd808> puppet failing on deployment-restbase01 due to corrupt apt config state [releng]
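The entries above don't list the individual commands; a typical sequence for recovering a wedged apt/dpkg state and pinning zsh back to stable, as a sketch, is:
<pre>
sudo dpkg --configure -a            # finish any half-configured packages
sudo apt-get -f install             # resolve broken dependencies
sudo apt-get install zsh=5.0.7-5    # downgrade from the unstable 5.1.1-1 build
sudo puppet agent --test            # confirm puppet runs cleanly again
</pre>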
2015-10-11 §
14:57 <legoktm> deploying https://gerrit.wikimedia.org/r/244192 [releng]
2015-10-09 §
21:57 <greg-g> 21:51 < ori> !log deployment-prep Accidentally clobbered /etc/init.d/mysql on deployment-db1, causing deployment-prep failures. Restored now [releng]
21:55 <twentyafterfour> deployment-db1 has a running mysqld again, shinken reports recovery. [releng]
21:41 <twentyafterfour> ori broke mariadb on deployment-db1 :-P [releng]
20:28 <hashar> beta cluster parsoid now runs from /parsoid.git && npm install (was from /deploy.git previously). In case of trouble poke subbu and see revert instructions on https://phabricator.wikimedia.org/T92871 [releng]
20:16 <hashar> Parsoid on beta is broken. Busy installing npm dependencies [releng]
20:09 <hashar> switching Parsoid on beta to install dependencies with npm (instead of /deploy) https://phabricator.wikimedia.org/T92871 for subbu [releng]
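Roughly what the switch amounts to on the beta parsoid host (the checkout path and service name here are assumptions; https://phabricator.wikimedia.org/T92871 has the actual and revert instructions):
<pre>
cd /srv/deployment/parsoid/parsoid   # a checkout of parsoid.git rather than deploy.git
npm install                          # fetch runtime dependencies from npm
sudo service parsoid restart
</pre>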
14:54 <hashar> added Geodata as a dependency to the wikibase jobs ( https://gerrit.wikimedia.org/r/#/c/244489/ ) [releng]