2014-09-25 §
22:21 <bd808> cleaned up puppet repo with `git rebase origin/production; git submodule update --init --recursive` [releng]
22:18 <bd808> puppet repo on deployment-salt out of whack. I will try to fix. [releng]
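A minimal sketch of the cleanup sequence bd808 describes above, assuming the repo is the puppetmaster checkout at /var/lib/git/operations/puppet (the path is an assumption, not from the log):
<pre>
cd /var/lib/git/operations/puppet        # assumed location of the puppet checkout
git fetch origin                         # get the latest production branch
git rebase origin/production             # replay local cherry-picks on top of it
git submodule update --init --recursive  # bring submodules back in line with the rebased tree
</pre>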
11:36 <_joe_> updated hhvm to fix most bugs, also cherry-picked https://gerrit.wikimedia.org/r/#/c/162839/ [releng]
08:15 <hashar> beta: puppetmaster rebased [releng]
08:10 <hashar> beta: dropped a patch that reverted OCG LVS configuration ( https://gerrit.wikimedia.org/r/#/c/146860/ ), it has been fixed by https://gerrit.wikimedia.org/r/#/c/148371/ [releng]
08:04 <hashar> attempting to rebase beta cluster puppet master. Currently at 74036376 [releng]
2014-09-24 §
23:00 <bd808> Updated the bash package via salt [releng]
20:52 <cscott> updated OCG to version 48acb8a2031863e35fad9960e48af60a3618def9 [releng]
15:30 <hashar_> install additional fonts on jenkins slaves for browser screenshots ( https://gerrit.wikimedia.org/r/#/c/162604/ and https://bugzilla.wikimedia.org/69535 ) [releng]
09:57 <hashar_> upgraded Zuul on all integration labs instances [releng]
09:33 <hashar_> Jenkins switched mwext-UploadWizard-qunit back to Zuul cloner by applying pending change {{gerrit|161459}} [releng]
09:19 <hashar_> Upgrading Zuul to f0e3688; cherry-picked https://review.openstack.org/#/c/123437/1 which fixes {{bug|71133}} ''Zuul cloner: fails on extension jobs against a wmf branch'' [releng]
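For reference, a hedged sketch of how a single Gerrit change such as the one above can be cherry-picked into a local checkout (the checkout path and project name are assumptions):
<pre>
cd /usr/local/src/zuul                   # assumed location of the Zuul checkout
# fetch patchset 1 of change 123437 from the upstream Gerrit and apply it
git fetch https://review.openstack.org/openstack-infra/zuul refs/changes/37/123437/1
git cherry-pick FETCH_HEAD
</pre>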
2014-09-23 §
23:08 <bd808> Jenkins and deployment-bastion talking to each other again after six (6!) disconnect / cancel jobs / reconnect cycles [releng]
22:53 <greg-g> The dumb "waiting for executors" bug is https://bugzilla.wikimedia.org/show_bug.cgi?id=70597 [releng]
22:51 <bd808> Jenkins stuck trying to update database in beta again with the dumb "waiting for executors" bug/problem [releng]
20:14 <cscott> updated OCG to version 1cf9281ec3e01d6cbb27053de9f2423582fcc156 [releng]
17:37 <AaronSchulz> Initialized bloom cache on betalabs, enabled it, and populated it for enwiki [releng]
2014-09-22 §
16:09 <bd808> Ori updating HHVM to 3.3.0-20140918+wmf1 (from deployment-prep SAL) [releng]
16:08 <ori> updating HHVM to 3.3.0-20140918+wmf1 [releng]
09:37 <hashar_> Jenkins: deleting old mediawiki extensions jobs (<tt>rm -fR /var/lib/jenkins/jobs/*testextensions-master</tt>). They are no longer triggered and have been superseded by the <tt>*-testextension</tt> jobs. [releng]
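A slightly more cautious form of the same cleanup, listing the obsolete job directories before removing them:
<pre>
ls -d /var/lib/jenkins/jobs/*testextensions-master   # dry run: see what matches
rm -fR /var/lib/jenkins/jobs/*testextensions-master  # then delete the obsolete jobs
</pre>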
2014-09-20 §
21:30 <bd808> Deleted /var/log/atop.* on deployment-bastion to free some disk space in /var [releng]
21:29 <bd808> Deleted /var/log/account/pacct.* on deployment-bastion to free some disk space in /var [releng]
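The general pattern behind these two cleanups, sketched for reference:
<pre>
df -h /var                       # check how full the partition is
du -sh /var/log/* | sort -h      # find the biggest offenders
rm /var/log/account/pacct.*      # rotated process-accounting logs
rm /var/log/atop.*               # rotated atop logs
df -h /var                       # confirm space was reclaimed
</pre>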
14:43 <andrewbogott> moving deployment-pdf02 to virt1009 [releng]
00:36 <mutante> raised instance quota to 43 [releng]
2014-09-19 §
21:16 <hashar> puppet is broken on Trusty integration slaves because they try to install the non-existent package php-parsekit. Work in progress; will get it sorted out eventually. [releng]
14:57 <hashar> Jenkins friday deploy: migrate all MediaWiki extension qunit jobs to Zuul cloner. [releng]
00:26 <cscott> updated OCG to version ce16f7adb60d7c77409e2e11ba0e5d6cce6955d5 [releng]
2014-09-17 §
12:20 <hashar> upgrading jenkins 1.565.1 -> 1.565.2 [releng]
2014-09-16 §
16:36 <bd808> Updated scap to 663f137 (Check php syntax with parallel `php -l`) [releng]
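Not scap's actual implementation, but a minimal sketch of the idea of parallel `php -l` syntax checking (the staging path is an assumption):
<pre>
# lint every PHP file, 8 checks at a time; xargs exits non-zero if any file fails
find /srv/mediawiki-staging -name '*.php' -print0 \
  | xargs -0 -n1 -P8 php -l >/dev/null
</pre>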
15:44 <godog> testing scap change from https://gerrit.wikimedia.org/r/#/c/160668/ [releng]
04:01 <jeremyb> deployment-mediawiki02: salt was broken with a msgpack exception. mv -v /var/cache/salt{,.old} && service salt-minion restart fixed it. also did salt-call saltutil.sync_all [releng]
04:00 <jeremyb> deployment-mediawiki03: /run was 99% full [releng]
03:59 <jeremyb> deployment-mediawiki03: rm -rv /run/hhvm/cache && service hhvm restart [releng]
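The same remediation with a before/after check added, assuming the nearly full filesystem is the /run tmpfs noted in the entry above:
<pre>
df -h /run                 # confirm the tmpfs is nearly full
rm -rv /run/hhvm/cache     # drop HHVM's on-disk cache
service hhvm restart       # HHVM recreates its cache as requests come in
df -h /run                 # verify space was reclaimed
</pre>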
02:46 <cscott> updated OCG to version 188a3c221d927bd0601ef5e1b0c0f4a9d1cdbd31 [releng]
00:51 <jeremyb> deployment-pdf01 removed base::firewall (ldap via wikitech) [releng]
2014-09-15 §
22:53 <jeremyb> deployment-pdf01: pkill -f grain-ensure [releng]
21:44 <andrewbogott> migrating deployment-videoscaler01 to virt1002 [releng]
21:41 <andrewbogott> migrating deployment-sentry2 to virt1002 [releng]
21:40 <cscott> *skipped* deploy of OCG, due to deployment-salt issues [releng]
21:36 <bd808> Trying to fix salt with `salt '*' service.restart salt-minion` [releng]
21:32 <bd808> only hosts responding to salt in beta are deployment-mathoid, deployment-pdf01 and deployment-stream [releng]
21:29 <bd808> salt calls failing in beta with errors like "This master address: 'salt' was previously resolvable but now fails to resolve!" [releng]
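A hedged sketch of the kind of triage described in these entries, run from the salt master on deployment-salt:
<pre>
salt '*' test.ping                     # see which minions still respond
salt '*' service.restart salt-minion   # restart the minion service on those that do
# minions that can no longer resolve the master have to be restarted locally:
#   service salt-minion restart
</pre>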
21:19 <bd808> Added Matanya to under_NDA sudoers group (bug 70864) [releng]
20:18 <hashar> restarted salt-master [releng]
19:50 <hashar> killed on deployment-bastion a bunch of <tt>python /usr/local/sbin/grain-ensure contains ... </tt> and <tt>/usr/bin/python /usr/bin/salt-call --out=json grains.append deployment_target scap</tt> commands [releng]
18:57 <hashar> scap breakage due to ferm is logged as https://bugzilla.wikimedia.org/show_bug.cgi?id=70858 [releng]
18:48 <hashar> https://gerrit.wikimedia.org/r/#/c/160485/ tweaked a default ferm configuration file which caused puppet to reload ferm. It ended up with rules that prevent ssh from other hosts, thus breaking rsync \o/ [releng]
18:37 <hashar> beta-scap-eqiad job is broken since ~17:20 UTC https://integration.wikimedia.org/ci/job/beta-scap-eqiad/21680/console || rsync: failed to connect to deployment-bastion.eqiad.wmflabs (10.68.16.58): Connection timed out (110) [releng]
2014-09-13 §
01:07 <bd808> Moved /srv/scap-stage-dir to /srv/mediawiki-staging; put a symlink in as a failsafe [releng]
00:31 <bd808> scap staging dir needs some TLC on deployment-bastion; working on it [releng]
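A sketch of the move with the compatibility symlink described above:
<pre>
mv /srv/scap-stage-dir /srv/mediawiki-staging
ln -s /srv/mediawiki-staging /srv/scap-stage-dir   # old path keeps working until everything is updated
</pre>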