Is it just me, or is GT running really, really slow today?
Frustrating!
yup. was just about to post something. this is exactly why i don't recommend this program to people, and it's a shame. the concept is extremely good, the execution is mediocre at best. i'd email them and complain and try to get money back, but i'm a) already hooked into daily reviews and keeping up with it (probably have undx ocdpd/o) and b) they rarely answer in a timely fashion. i guess this is why it's so cheap...
Same here! Server really slow...annoying as hell! Thought it was me and restarted my computer, but it's still happening. I agree this program could very easily replace First Aid...the content and organization is similar to FA or much better, like Micro for instance. Their Path organization could use some work; if they upgrade their server + fix the few content/organization issues = big win. I think a ton of people may be using it and hence the server problems.
Yeah, it's still slow. Just what I need when I have 600 friggin' cards in my schedule.
Unreal. If this continues, I'm going to have to abort GT. Wasting too much time waiting for it to load!
I'll probably try to finish out GT's "Q-bank" tho'... sigh.
Same, I might have to drop GT from my plans if it stays this slow. Same goes for my recommending the site. GT, FIX IT!
Does GT even have a dedicated customer service person/department? Or is it just somebody responding to every tenth email between bites of a pastrami sandwich once or twice a month?
Got this email from one of their technicians earlier (I removed their name/contact info).
My name is ________ and I'm one of the engineers at Gunner. I wanted to send you a quick note to apologize if you've experienced slow load times on Gunner over the past few days. I'm just finishing medical school myself so I know how important these weeks are for Step 1. We're aware of the problem and are at work now to resolve the issue and get the Gunner servers back on track.
I haven't had the chance to connect with many of our Gunner users in the past and I'd like to change that right now by inviting you to email me any time at ______________. I'd be happy to hear any suggestions, rants, calls for help, questions about the wards, anything. Seriously.
I'm also thinking about starting an email list or RSS for regular updates about what's going on at Gunner so no one ever feels left out of the loop. If you think that would be useful, definitely send me a note.
So my apologies again about the server, we aim to be back up and running at full speed again soon.
Yay, thanks for the update!
Hey guys - I'm the Gunner engineer that sent out the email above (thanks for removing my personal info, that's very HIPAA, but see below). I've never had an account on SDN, but it's too sad a story to hear of people ringing our doorbell when the lights are on but no one's coming to the door. Fail. I want to change that and offer you guys a real punching bag if you ever need to kick or shout or ask or anything. So no HIPAA here: my name is Nicholas and you can email me any time at nicholas at gunnertraining.
Now a quick explanation of the server issue for my tech buddies: Gunner runs on Ruby on Rails with a MySQL database, and we use virtual servers on shared hosts. A few weeks ago we pulled the database out of the machine running the Rails instances and gave it its own virtual machine. That helped speed things up. Nice job us.

Starting Sunday night the database throughput came screeching to a halt. Fail job us. We hadn't pushed any new code and weren't experiencing a burst of traffic, so the issue really was like a rogue wave, just not as cool. Just last week we started using an awesome performance monitoring tool called New Relic, which was recording how big the fail wave in the database was, but - bigger fail wave - the alert system wasn't configured properly, so no alarm bells went off until we saw some of the traffic on SDN yesterday morning. Yes, punch me; I gave you the bag above: nicholas at GT.

So we only started to examine the issue yesterday morning and were initially very confused by what we were seeing. Rogue waves like that don't usually make sense in the absence of a traffic spike, a recent code merge, or some other manipulation. After trawling our logs and running this test and that test, we learned from our shared hosting provider that another VM on the same physical box as our database VM had started to encounter some significant memory leaks on Sunday evening and had drained the resources of that physical server, our virtual server included. The problem wasn't resolved until this morning, at which point the rogue wave abated and the server started running smoothly again.
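For the tech buddies who want the moral of the story in code: the failure mode above wasn't a lack of monitoring data, it was that nothing *acted* on the data. Here's a hypothetical, minimal sketch (not Gunner's actual code, and not New Relic's API — just a generic illustration) of a watchdog that times a representative database query and raises a flag when latency crosses a threshold:

```ruby
# Hypothetical sketch: a tiny latency watchdog. The names
# (LatencyWatchdog, threshold_seconds) are made up for illustration.
# The lesson from the outage: metrics are useless unless something
# actually sounds an alarm when the numbers go bad.

class LatencyWatchdog
  def initialize(threshold_seconds:)
    @threshold = threshold_seconds
  end

  # Times the given block (e.g. a representative MySQL query) and
  # returns :ok or :alert. In a real setup, :alert would page someone
  # instead of waiting for users to complain on a forum.
  def check
    start = Time.now
    yield
    elapsed = Time.now - start
    elapsed > @threshold ? :alert : :ok
  end
end

# Simulate a healthy query and a rogue-wave query with sleeps.
watchdog = LatencyWatchdog.new(threshold_seconds: 0.05)
fast = watchdog.check { sleep 0.001 } # => :ok
slow = watchdog.check { sleep 0.1 }   # => :alert
```

In practice you'd wire the `:alert` branch to email/SMS and run the check on a schedule — which is essentially what a correctly configured New Relic alert policy would have done for us.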
The learning points are two: New Relic is golden for monitoring website performance but it's pretty quiet if you don't have the alert system configured correctly, and shared hosting is a very bad idea. If the VM next to you has tuberculosis, your VM gets tuberculosis. So we'll be switching to our own physical server before too long, a nice, sterile, TB-free server all to ourselves, and within its walls we can set up clean VMs to our hearts' content, no isolation precautions.
So thank you everyone for hanging in there. If you're ever left on the porch and feel like we're ignoring you, email me, Nicholas, at gunnertraining dot com. I'd be happy to hear from you.