2018-09-26 §
08:14 <hashar> Restarting CI Jenkins on contint1001 [releng]
2018-09-25 §
23:01 <marxarelli> configured new jenkins node integration-slave-docker-1043 with 6 executors [releng]
23:01 <marxarelli> replaced integration-slave-docker-1042 with new integration-slave-docker-1043 instance [releng]
22:39 <marxarelli> launching new integration-slave-docker-1042 bigram instance [releng]
22:33 <marxarelli> deleting remaining m1.medium instances used as m4executors (T205362) [releng]
22:15 <marxarelli> taking remaining m1.medium m4executor jenkins nodes offline (T205362) [releng]
18:16 <marxarelli> reconfiguring bigram jenkins nodes to use 6 executors. 7 were configured by mistake (T205362) [releng]
18:00 <marxarelli> configuring new integration-slave-docker-1041 jenkins node with 7 executors (T205362) [releng]
17:42 <marxarelli> configuring new jenkins node integration-slave-docker-1040 with 7 executors (T205362) [releng]
17:38 <marxarelli> launching integration-slave-docker-1041 bigram instance (T205362) [releng]
17:30 <marxarelli> the puppet parameter for docker_lvm_volume specified in horizon was not applied correctly on the first puppet run for some reason. tearing down integration-slave-docker-1039... [releng]
17:25 <marxarelli> launching integration-slave-docker-1040 bigram instance (T205362) [releng]
17:24 <marxarelli> deleting instances integration-slave-docker-1007/1008 (T205362) [releng]
17:13 <marxarelli> launching new integration-slave-docker-1039 bigram instance [releng]
17:12 <marxarelli> taking integration-slave-docker-1007/1008 offline for replacement (T205362) [releng]
17:09 <marxarelli> deleting integration-slave-docker-1030/1031 instances (T205362) [releng]
17:05 <marxarelli> taking integration-slave-docker-1030/1031 offline for replacement [releng]
16:47 <marxarelli> increasing executors to 7 for jenkins nodes integration-slave-docker-1033/1034 [releng]
16:46 <marxarelli> new instance creation delayed due to quota [releng]
16:45 <marxarelli> launching new integration-slave-docker-1039/1040 bigram instances [releng]
01:21 <legoktm> deployed https://gerrit.wikimedia.org/r/450508 [releng]
00:36 <legoktm> deploying https://gerrit.wikimedia.org/r/462609 [releng]
00:22 <legoktm> deploying https://gerrit.wikimedia.org/r/453447 [releng]
2018-09-24 §
20:21 <bearND> (beta): Update mobileapps to badb463 [releng]
10:55 <hashar> gerrit: granting labs/tools/* project owners the ability to submit changes | https://gerrit.wikimedia.org/r/#/c/labs/tools/+/462420/ [releng]
09:51 <hashar> deployment-deploy01 : backed up /srv/mediawiki-staging/php-master/cache/gitinfo and created a new one. Its size of 69632 bytes might cause slow writes? | T204762 [releng]
09:24 <hashar> Live-hacked scap code on deployment-deploy01 for T204762, then reverted the hack changes [releng]
08:32 <hashar> deployment-deploy01 rm -fR /tmp/scap_l10n_* [releng]
06:41 <legoktm> deploying https://gerrit.wikimedia.org/r/462341 [releng]
03:45 <kart_> Update cxserver to d913793 [releng]
2018-09-23 §
14:03 <Krenair> rm stuff in deployment-deploy01:/tmp to try to clear space and stop shinken whining [releng]
01:05 <andrewbogott> rebooted deployment-maps03; OOM and also T205195 [releng]
2018-09-22 §
20:51 <Hauskatze> github: deleting several wikimedia/mediawiki-extensions-Collection-.* mirror repos for T183891 [releng]
20:05 <Hauskatze> github: deleted mirror wikimedia/mediawiki-extensions-Collection-OfflineContentGenerator-zim_renderer | T183891; moving to the next one [releng]
18:21 <Krenair> went to do the same with deployment-maps03 and accidentally broke SSH access to the server [releng]
18:20 <Krenair> removed ferm package from deployment-snapshot01 as it appeared unmanaged by puppet and was causing problems with SSH access from the current deployment hosts (previous logs referenced T153468, this just explains why puppet hadn't purged stuff) [releng]
18:01 <Krenair> rm deployment-maps03:/etc/ferm/conf.d/10_redis_exporter_6379 as it was breaking ferm from starting (T153468), puppet has not re-created it so I assume it was historical (shouldn't puppet be purging such files?) [releng]
18:00 <Krenair> rm deployment-snapshot01:/etc/ferm/conf.d/10_prometheus-nutcracker-exporter as it was breaking ferm from starting (T153468), puppet has not re-created it so I assume it was historical (shouldn't puppet be purging such files?) [releng]
2018-09-21 §
17:26 <marxarelli> adding jenkins node integration-slave-docker-1038 with 7 executors [releng]
16:47 <marxarelli> added new jenkins node integration-slave-docker-1037 with 7 executors [releng]
15:49 <marxarelli> replacing integration-slave-docker-1036 with new bigram instance [releng]
15:48 <marxarelli> taking node integration-slave-docker-1035 offline due to unusually high steal cpu time and long build durations [releng]
15:17 <marxarelli> integration-slave-docker-1035/1036 showing unusually high cpu steal and unusually long mean build durations [releng]
15:15 <marxarelli> taking integration-slave-docker-1036 offline due to unusually high cpu steal % trend [releng]
15:13 <marxarelli> launching integration-slave-docker-1037 bigram instance [releng]
13:03 <Amir1> ores:7b987a7 is going to beta [releng]
05:32 <legoktm> deployed https://gerrit.wikimedia.org/r/461510 [releng]
2018-09-20 §
23:48 <marxarelli> adding new integration-slave-docker-1035/1036 jenkins nodes, each with 7 executors [releng]
23:23 <marxarelli> launching integration-slave-docker-1035/1036 bigram instances [releng]
23:20 <marxarelli> taking integration-slave-docker-1004/1005 offline for replacement (T202160) [releng]