2013-11-07
23:26 <ori> synchronized docroot/bits/static-current 'Update static-current symlinks to 1.23wmf3' [production]
22:40 <reedy> synchronized wmf-config/InitialiseSettings.php [production]
22:20 <RoanKattouw> Reloading zuul config for new oojs-core and oojs-ui pipelines [production]
21:16 <anomie> synchronized php-1.23wmf3/resources/jquery/jquery.spinner.js 'Backport gerrit change 94161 to fix regression since 1.23wmf1' [production]
21:16 <anomie> synchronized php-1.23wmf3/maintenance/jsduck/external.js 'Backport gerrit change 94161 to fix regression since 1.23wmf1' [production]
21:16 <anomie> synchronized php-1.23wmf3/maintenance/jsduck/config.json 'Backport gerrit change 94161 to fix regression since 1.23wmf1' [production]
21:15 <anomie> synchronized php-1.23wmf3/resources/Resources.php 'Backport gerrit change 94161 to fix regression since 1.23wmf1' [production]
21:15 <anomie> synchronized php-1.23wmf3/skins/common/protect.js 'Backport gerrit change 94161 to fix regression since 1.23wmf1' [production]
21:15 <anomie> synchronized php-1.23wmf3/skins/common/upload.js 'Backport gerrit change 94161 to fix regression since 1.23wmf1' [production]
21:15 <anomie> synchronized php-1.23wmf2/resources/jquery/jquery.spinner.js 'Backport gerrit change 94161 to fix regression since 1.23wmf1' [production]
21:14 <anomie> synchronized php-1.23wmf2/maintenance/jsduck/external.js 'Backport gerrit change 94161 to fix regression since 1.23wmf1' [production]
21:14 <anomie> synchronized php-1.23wmf2/maintenance/jsduck/config.json 'Backport gerrit change 94161 to fix regression since 1.23wmf1' [production]
21:14 <anomie> synchronized php-1.23wmf2/resources/Resources.php 'Backport gerrit change 94161 to fix regression since 1.23wmf1' [production]
21:13 <anomie> synchronized php-1.23wmf2/skins/common/protect.js 'Backport gerrit change 94161 to fix regression since 1.23wmf1' [production]
21:13 <anomie> synchronized php-1.23wmf2/skins/common/upload.js 'Backport gerrit change 94161 to fix regression since 1.23wmf1' [production]
21:00 <Reedy> Created betafeatures database table on metawiki and commonswiki [production]
20:53 <reedy> synchronized wmf-config/InitialiseSettings.php [production]
20:35 <^d> elastic: in-place reindexing for all wikis running cirrus as primary [production]
20:34 <reedy> Finished syncing Wikimedia installation... : Ensure l10n cache is up to date [production]
20:26 <reedy> Started syncing Wikimedia installation... : Ensure l10n cache is up to date [production]
20:14 <demon> synchronized php-1.23wmf2/extensions/CirrusSearch 'Cirrus to master' [production]
20:06 <LeslieCarr> put sampling configuration on cr1-sdtpa - possible problems = high RE cpu utilization, which could cause network instability [production]
19:49 <reedy> synchronized wmf-config/ [production]
19:26 <reedy> synchronized w/robots.php [production]
19:18 <reedy> rebuilt wikiversions.cdb and synchronized wikiversions files: phase1 wikis to 1.23wmf3 [production]
19:12 <reedy> rebuilt wikiversions.cdb and synchronized wikiversions files: all wikipedias to 1.23wmf2 [production]
19:09 <reedy> synchronized php-1.23wmf3/extensions/Wikibase [production]
19:08 <reedy> synchronized docroot and w [production]
19:07 <reedy> synchronized wmf-config/ [production]
18:33 <^d> for gerrit [production]
18:33 <^d> reloading replication plugin [production]
18:33 <reedy> rebuilt wikiversions.cdb and synchronized wikiversions files: testwiki back to 1.23wmf2 [production]
18:27 <reedy> Finished syncing Wikimedia installation... : testwiki to 1.23wmf3 and build l10n cache [production]
18:14 <reedy> Started syncing Wikimedia installation... : testwiki to 1.23wmf3 and build l10n cache [production]
18:09 <reedy> synchronized docroot and w [production]
18:07 <reedy> synchronized php-1.23wmf3 'Staging' [production]
17:10 <gwicke> deployed Parsoid 986c1e78708 [production]
17:10 <ori-l> Ran 'varnishadm ban.url .' on cp1045 & cp1058 [production]
17:01 <gwicke> roll back Parsoid deploy as we are not able to purge the cache [production]
16:22 <gwicke> deployed Parsoid 986c1e787088 [production]
16:14 <jeremyb> morebots running on tools-login again. will troubleshoot more later [production]
16:14 <hashar> jenkins : reducing number of executors on gallium from 8 to 5, we have lanthanum now. [production]
16:11 <jeremyb> hi hashar! [production]
16:07 <hashar> hello jeremyb [production]
16:06 <jeremyb> hello world [production]
15:52 <cmjohnson1> dns update [production]
15:48 <jeremyb> moved morebots back to the grid [production]
11:27 <hasharDojo> gallium : killed git process [production]
11:18 <hasharDojo> gallium on swap death thanks to git eating all memory :/ [production]
10:42 <hashar> gallium / jenkins : running git gc --aggressive on Zuul git repositories under /srv/ssd/zuul/git [production]