2015-09-07 §
17:57 <YuviPanda> created tools-k8s-master-01 with jessie, will be etcd and kubernetes master [tools]
2015-09-03 §
07:09 <valhallasw`cloud> and just re-running puppet solves the issue. Sigh. [tools]
07:09 <valhallasw`cloud> last message in puppet.log.1.gz is Error: /Stage[main]/Toollabs::Exec_environ/Package[fonts-ipafont-gothic]/ensure: change from 00303-5 to latest failed: Could not get latest version: Execution of '/usr/bin/apt-cache policy fonts-ipafont-gothic' returned 100: fonts-ipafont-gothic: (...) E: Cache is out of sync, can't x-ref a package file [tools]
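(The manual equivalent of what the puppet re-run presumably did to clear this — a sketch, assuming the usual remedy for apt's "Cache is out of sync" error of regenerating the package lists:
    sudo apt-get update                      # rebuild the package lists so apt-cache can x-ref them again
    apt-cache policy fonts-ipafont-gothic    # confirm the package resolves without the error 100
)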
07:07 <valhallasw`cloud> err, the file is empty, not missing. [tools]
07:07 <valhallasw`cloud> Puppet failure on tools-exec-1215 is CRITICAL 66.67% of data above the critical threshold -- but /var/log/puppet.log doesn't exist?! [tools]
2015-09-02 §
13:58 <valhallasw`cloud> rebooting tools-exec-1403; https://phabricator.wikimedia.org/T107052 happening, also causing significant NFS server load [tools]
13:55 <valhallasw`cloud> restarted gridengine_exec on tools-exec-1403 [tools]
13:53 <valhallasw`cloud> tools-exec-1403 does lots of locking operations. Only job there was jid 1072678 = /data/project/hat-collector/irc-bots/snitch.py. Rescheduled that job. [tools]
13:16 <YuviPanda> deleted all jobs of ralgisbot [tools]
13:12 <YuviPanda> suspended all jobs in ralgisbot temporarily [tools]
12:57 <YuviPanda> rescheduled all jobs of ralgisbot, was suffering from stale NFS file handles [tools]
2015-09-01 §
21:01 <valhallasw`cloud> killed one of the grrrit-wm jobs; for some reason two of them were running?! Not sure what SGE is up to lately. [tools]
15:47 <valhallasw`cloud> git reset --hard cdnjs on tools-web-static-01 [tools]
06:23 <valhallasw`cloud> seems to have worked. SGE :( [tools]
06:17 <valhallasw`cloud> going to restart sge_qmaster, hoping this solves the issue :/ [tools]
06:07 <valhallasw`cloud> e.g. "queue instance "task@tools-exec-1211.eqiad.wmflabs" dropped because it is overloaded: np_load_avg=1.820000 (= 0.070000 + 0.50 * 14.000000 with nproc=4) >= 1.75" but the actual load is only 0.3?! [tools]
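(Reading that scheduler message as gridengine's per-job load adjustment — an assumption, since the scheduler config isn't quoted here — the inflation is entirely artificial:
    np_load_avg = reported load + load_adjustment * recently_dispatched_jobs / nproc
                = 0.07          + 0.50            * 14                       / 4
                = 0.07 + 1.75
                = 1.82  >= threshold 1.75
i.e. each of the 14 jobs just dispatched to the node adds 0.50/nproc to the assumed load until the adjustment decays, which is why queues report as overloaded while the real load is only ~0.3.)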
06:06 <valhallasw`cloud> test job does not get submitted because all queues are overloaded?! [tools]
06:06 <valhallasw`cloud> investigating SGE issues reported on irc/email [tools]
2015-08-31 §
21:21 <valhallasw`cloud> webservice: error: argument server: invalid choice: 'generic' (choose from 'lighttpd', 'tomcat', 'uwsgi-python', 'nodejs', 'uwsgi-plain') (for tools.javatest) [tools]
21:20 <valhallasw`cloud> restarted webservicemonitor [tools]
21:19 <valhallasw`cloud> seems to have some errors in restarting: subprocess.CalledProcessError: Command '['/usr/bin/sudo', '-i', '-u', 'tools.javatest', '/usr/local/bin/webservice', '--release', 'trusty', 'generic', 'restart']' returned non-zero exit status 2 [tools]
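(That failure can be reproduced by hand with the command from the traceback; the exit status 2 is the argparse rejection quoted at 21:21 above:
    sudo -i -u tools.javatest /usr/local/bin/webservice --release trusty generic restart
    # exits 2: 'generic' is not an accepted server type
    # (choose from lighttpd, tomcat, uwsgi-python, nodejs, uwsgi-plain)
)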
21:18 <valhallasw`cloud> running puppet agent -tv on tools-services-02 to make sure webservicemonitor is running [tools]
21:15 <valhallasw`cloud> several webservices seem to actually have not gotten back online?! what on earth is going on. [tools]
21:10 <valhallasw`cloud> some jobs still died (including tools.admin). I'm assuming service.manifest will make sure they start again [tools]
20:29 <valhallasw`cloud> the sorted list is not so spread out in terms of affected hosts, because a lot of jobs were started on lighttpd-1409 and -1410 around the same time. [tools]
20:25 <valhallasw`cloud> ca 500 jobs @ 5s/job = approx 40 minutes [tools]
20:23 <valhallasw`cloud> doh. accidentally used the wrong file, causing restarts for another few uwsgi hosts. Three more jobs dead *sigh* [tools]
20:21 <valhallasw`cloud> now doing more rescheduling, with 5 sec intervals, on a sorted list to spread load between queues [tools]
19:36 <valhallasw`cloud> last restarted job is 1423661, rest of them are still in /home/valhallaw/webgrid_jobs [tools]
19:35 <valhallasw`cloud> one per second still seems to make SGE unhappy; there's a whole set of jobs dying, mostly uwsgi? [tools]
19:31 <valhallasw`cloud> https://phabricator.wikimedia.org/T110861 : rescheduling 521 webgrid jobs, at a rate of one per second, while watching the accounting log for issues [tools]
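(The reschedule pass was roughly of this shape — a sketch only, since the exact command line isn't in the log; /home/valhallaw/webgrid_jobs is the job list mentioned at 19:36, and the accounting path assumes the Debian gridengine packaging:
    # reschedule each webgrid job, throttled so sge_qmaster isn't hammered
    while read jid; do
        qmod -rj "$jid"    # ask gridengine to restart the job on another node
        sleep 1            # one job per second; later raised to 5s, see 20:21 above
    done < /home/valhallaw/webgrid_jobs
    # in parallel, watch the accounting log for jobs exiting abnormally
    tail -f /var/lib/gridengine/default/common/accounting
)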
07:31 <valhallasw`cloud> removed paniclog on tools-submit; probably related to the NFS outage yesterday (although I'm not sure why that would give OOMs) [tools]
2015-08-30 §
13:23 <valhallasw`cloud> killed wikibugs-backup and grrrit-wm on tools-webproxy-01 [tools]
13:20 <valhallasw`cloud> disabling 503 error page [tools]
13:01 <YuviPanda> rebooted tools-bastion-01 to see if that remounts NFS [tools]
10:57 <valhallasw`cloud> started wikibugs from tools-webproxy-01 as well, still need to check if the phab<->redis part is still alive [tools]
10:55 <valhallasw`cloud> restarted grrrit-wm from tools-webproxy-01 [tools]
10:53 <valhallasw`cloud> Set error page on tools webserver via Hiera + some manual hacking (https://wikitech.wikimedia.org/wiki/Hiera:Tools) [tools]
2015-08-27 §
15:00 <valhallasw`cloud> killed multiple kmlexport processes on tools-webgrid-lighttpd-1401 again [tools]
2015-08-25 §
14:58 <YuviPanda> pooled in two new instances for the precise exec pool [tools]
14:45 <YuviPanda> reboot tools-exec-1221 [tools]
14:26 <YuviPanda> rebooting tools-exec-1220 because NFS wedge... [tools]
14:18 <YuviPanda> pooled in tools-webgrid-generic-1405 [tools]
10:16 <YuviPanda> created tools-webgrid-generic-1405 [tools]
10:04 <YuviPanda> apply exec node puppet roles to tools-exec-1220 and -1221 [tools]
09:59 <YuviPanda> created tools-exec-1220 and -1221 [tools]
2015-08-24 §
16:37 <valhallasw`cloud> more processes were started, so I left a talk page message on [[User:Coet]] (who was starting the processes according to /var/log/auth.log) and used 'write coet' on tools-bastion-01 [tools]
16:15 <valhallasw`cloud> kill -9'ing because normal killing doesn't work [tools]
16:13 <valhallasw`cloud> killing all processes of tools.cobain which are flooding tools-bastion-01 [tools]
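(A sketch of that sweep — the actual commands aren't in the log; the SIGTERM pass is implied by "normal killing doesn't work":
    sudo pkill -TERM -u tools.cobain    # polite attempt first
    sudo pkill -KILL -u tools.cobain    # SIGKILL once the processes ignore SIGTERM
)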
2015-08-20 §
18:44 <valhallasw`cloud> both are now at 3dbbc87 [tools]