2016-02-03
22:24 <bd808> Manually ran sync-common on deployment-jobrunner01.eqiad.wmflabs to pick up wmf-config changes that were missing (InitializeSettings, Wikibase, mobile) [releng]
17:43 <marxarelli> Reloading Zuul to deploy previously undeployed Icd349069ec53980ece2ce2d8df5ee481ff44d5d0 and Ib18fe48fe771a3fe381ff4b8c7ee2afb9ebb59e4 [releng]
15:12 <hashar> apt-get upgrade deployment-sentry2 [releng]
15:03 <hashar> redeployed rcstream/rcstream on deployment-stream by using git-deploy on deployment-bastion [releng]
14:55 <hashar> upgrading deployment-stream [releng]
14:42 <hashar> pooled integration-slave-trusty-1015 back; seems OK [releng]
14:35 <hashar> manually triggered a bunch of browser test jobs [releng]
11:40 <hashar> apt-get upgrade deployment-ms-be01 and deployment-ms-be02 [releng]
11:32 <hashar> fixing puppet.conf on deployment-memc04 [releng]
11:08 <hashar> restarting beta cluster puppetmaster just in case [releng]
11:07 <hashar> beta: apt-get upgrade on deployment-cache* hosts and checking puppet [releng]
10:59 <hashar> integration/beta: deleting /etc/apt/apt.conf.d/*proxy files. There is no need for them; in fact the web proxy is not reachable from labs [releng]
10:53 <hashar> integration: switched puppet repo back to 'production' branch, rebased [releng]
10:49 <hashar> various beta cluster hosts have puppet errors [releng]
10:46 <hashar> integration-slave-trusty-1013 close to running out of disk space on /mnt [releng]
10:42 <hashar> integration-slave-trusty-1016 out of disk space on /mnt [releng]
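The two /mnt checks above are easy to script; here is a minimal sketch using Python's standard `shutil.disk_usage` (the 10% threshold is an illustrative choice, not an actual CI policy):

```python
import shutil

def free_fraction(path):
    """Return the fraction of the filesystem at `path` that is still free."""
    usage = shutil.disk_usage(path)
    return usage.free / usage.total

# On a Jenkins slave this would be pointed at /mnt; "/" is used here
# only because it exists on every host.
if free_fraction("/") < 0.10:
    print("low disk space, clean up workspaces")
```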
03:45 <bd808> Puppet failing on deployment-fluorine with "Error: Could not set uid on user[datasets]: Execution of '/usr/sbin/usermod -u 10003 datasets' returned 4: usermod: UID '10003' already exists" [releng]
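For the usermod failure above: shadow-utils returns exit status 4 when the requested UID is already in use by another account. A minimal sketch for finding which account holds a UID, using Python's standard `pwd` module (UID 10003 is the one from the log; UID 0 is queried below only because it exists on every Linux host):

```python
import pwd

def uid_owner(uid):
    """Return the login name that currently owns `uid`, or None if free."""
    try:
        return pwd.getpwuid(uid).pw_name
    except KeyError:
        return None

# On deployment-fluorine one would check uid_owner(10003) to see which
# account is blocking the puppet-managed change.
print(uid_owner(0))
```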
03:44 <bd808> Freed 28G by deleting deployment-fluorine:/srv/mw-log/archive/*2015* [releng]
03:41 <bd808> Ran deployment-bastion.deployment-prep:/home/bd808/cleanup-var-crap.sh and freed 565M [releng]
2016-02-01
23:53 <bd808> Logstash working again; I applied a change to the default mapping template for Elasticsearch that ensures that fields named "timestamp" are indexed as plain strings [releng]
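The 23:53 fix above can be sketched as an Elasticsearch index template (1.x-era syntax, current at the time) that maps any field named "timestamp" to an unanalyzed string so syslog-style values like "Feb 1 22:56:39" never hit date detection. The template name and index pattern here are illustrative, not the actual change:

```json
{
  "template": "logstash-*",
  "mappings": {
    "_default_": {
      "dynamic_templates": [
        {
          "timestamp_as_string": {
            "match": "timestamp",
            "mapping": { "type": "string", "index": "not_analyzed" }
          }
        }
      ]
    }
  }
}
```

Applying it via the `_template` API only affects newly created daily indices; existing indices keep their old mapping, which is why the bad 2016-02-01 index had to be dropped.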
23:46 <bd808> Elasticsearch index template for beta logstash cluster making crappy guesses about syslog events; dropped 2016-02-01 index; trying to fix default mappings [releng]
23:08 <bd808> HHVM logs causing rejections during document parse when inserting in Elasticsearch from logstash. They contain a "timestamp" field that looks like "Feb 1 22:56:39" which is making the mapper in Elasticsearch sad. [releng]
23:02 <bd808> Elasticsearch on deployment-logstash2 rejecting all documents with 400 status. Investigating [releng]
22:50 <bd808> Copying deployment-logstash2.deployment-prep:/var/log/logstash/logstash.log to /srv for debugging later [releng]
22:48 <bd808> deployment-logstash2.deployment-prep:/var/log/logstash/logstash.log is 11G of fail! [releng]
22:46 <bd808> root partition on deployment-logstash2 full [releng]
22:43 <bd808> No data in logstash since 2016-01-30T06:55:37.838Z; investigating [releng]
15:33 <hashar> Image ci-jessie-wikimedia-1454339883 in wmflabs-eqiad is ready [releng]
15:01 <hashar> Refreshing Nodepool image. Might have npm/grunt properly set up [releng]
03:15 <legoktm> deploying https://gerrit.wikimedia.org/r/267630 [releng]