
Wednesday, January 26, 2011

Hosted VoIP vs. On-Site


As the world moves into the "cloud," telephony has made its way there too. Several vendors are selling hosted VoIP solutions, while others say buy your own gear, keep it on site, and control your own destiny. I've had the opportunity to work with both, and while each has its pros and cons, companies need to understand what they are getting into and how to think about their future needs. IP telephony is not a one-size-fits-all service, and you have to think closely about what you will be paying for.

Hosted VoIP Pros:
You can get a feature-rich system, and all you pay for is the telephones, the switches, and a monthly fee for the services you use. Some providers offer messaging to email, contact center, video, and faxing services. You can get large-corporation features in your small business that can give you an edge. You won't need an on-site telephony person to take care of your needs, and you can even provision your own phones to add new users.


Your Own Equipment Pros:
You own your own gear. That means you are free to find the cheapest service you can get from a provider, and if you don't like the service contract you purchased, you can go find someone else; all you need to do is change passwords and cut access. If you have an on-site engineer, you can get changes done quickly without putting in tickets to a service desk. You can also work on crazy customizations without having to pay a fee to your provider.


Hosted VoIP Cons:

You are at the mercy of the provider. They own the call processing component, so if you want a feature they don't offer, you are out of luck. The engineers you work with can vary in skill, and if they have an outage in their data center, you are stuck like Chuck. One provider I worked with had a broadcast storm: calls for 250 clients didn't go out to the PSTN, the hosted call center went down, and we also had several voicemail outages during my time there. If the company goes out of business, guess what? Who are you going to get your call processing from? Costs can also sneak up on you: for a company with high turnover, MACs (moves, adds, and changes) can add up quickly. Companies that expect high turnover, like call centers, should be very careful when choosing a hosted solution.
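To put rough, purely hypothetical numbers on it: at $25 per MAC and 20 moves a month, that's $500 a month, or $6,000 a year, on top of your normal per-seat fees. Your provider's rates will differ, but the math adds up fast.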


On-Site Equipment Cons:
Cost, cost, cost. There are huge upfront costs associated with getting your own telephony on site; Cisco, Avaya, and Nortel can all be pricey. You will have to pay for installation, and depending on the size of your company you will either have to pay for on-site employees or contract the work out to a provider. What I have also seen is that the network folks inherit it and just want to stay as far away from the phones as they can. Something we take for granted, like phones, can be a nightmare for the people who keep the VoIP packets flowing.


At the end of the day you have to decide what your company can afford and what it needs. I like both solutions, and I've seen both of them fail when IT managers didn't ask the right questions about what they were purchasing and weigh their needs against it.

Tuesday, January 18, 2011

When poop hits the fan!!!!!





We talk about this a lot on the boards and in real life, so how do we handle this sort of event? You get a call that XXXX is down and nobody has internet or phone access across several sites. Where do you begin? Do you hit the sweat button? Is this your time to take a coffee or smoke break to get mentally ready to get in the game? Some people like this part of the job more than others; I just happen to be a guy who likes dealing with outages. I handle several throughout the day, ranging from one user with no access to my entire company unable to make calls. I'm going to shed some light on the method to the madness, so you can handle the call, excel, and be the envy of your peers. I also want to hear from you; you might know something better, and we can all learn something new.





I approach all outages the same way. Before I jump into gear, I ask a few preliminary questions:



1. What is the problem?

2. When did it happen?

3. How many users are affected?

4. Any changes to your system recently?





OK, let's be honest: on some of these questions, nobody is going to be completely straight with you most of the time. I'm usually talking to another IT guy who was probably messing around and doesn't want to come clean (there are several stories about that involving call centers and auto attendants that I will tell one day), but it gives us a place to start.

Next I get access to the network through whatever path is left. Sometimes it's a terminal server if it's a complete outage; sometimes I can VPN into another site. Once I'm in, I get my tools ready and fire most of them up: usually SecureCRT, Notepad++, Kiwi CatTools, Wireshark, Kiwi Syslog, and a command prompt for pings. Then I get access to whatever devices seem to be the problem, or the closest I can get to them.

Now comes my initial playbook: show commands for the particular technology and a check of the logs (I'm also checking the last login and who made changes). 75-95 percent of the time the problem is right there if I work up the OSI model. Other times debugs will be needed; that's where the syslog server and the other tools come in.

This is also the time when people start asking questions. The usual answer of "I'm running debugs looking for any errors" is enough to back off most people I have dealt with. Other times the VP of technology or whoever is on the phone asking questions, and usually they don't know what the hell they are talking about (one enterprise architect told me he could reach a 192.168 network from his connection at home). I usually tell them to give me a minute while I put them on hold and gather more info. If they become too problematic, I get someone else in to run damage control, because being asked for updates every five minutes takes time away from solving the problem.

If I still don't have it up after 30-45 minutes and it's a global outage, I will reboot whatever device it is. That has a better success rate than we like to give it credit for; as smart IT guys we think the old Raytheon reset is beneath us. Then after an hour or so it's time to escalate to someone smarter than me, sometimes to Cisco, other times to another engineer in the office who has worked with that technology. In the end we solve some, others kick our arse, but that's how we all learn.
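When I say "work up the OSI model," the very first pass is usually just a reachability sweep along the path before I touch a single config. Here's a rough sketch in Python of the kind of thing I mean; the device names and addresses are made up, and the ping flags assume a Linux/Mac box (Windows uses -n and -w instead):

#!/usr/bin/env python
# Quick reachability sweep along the path, closest device first.
# Hostnames and IPs below are made up for illustration.
import subprocess

hops = [
    ("default gateway",  "192.168.1.1"),
    ("core switch",      "192.168.1.2"),
    ("WAN edge router",  "10.0.0.1"),
    ("remote site core", "10.20.0.1"),
]

for name, ip in hops:
    # Two echo requests with a two-second timeout; suppress the ping chatter.
    result = subprocess.call(["ping", "-c", "2", "-W", "2", ip],
                             stdout=open("/dev/null", "w"))
    status = "reachable" if result == 0 else "NO RESPONSE"
    print("%-18s %-14s %s" % (name, ip, status))

Wherever the first "NO RESPONSE" shows up is usually the box where I start digging with show commands and the logs.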



Thursday, January 13, 2011

My CCIE rack and some company in DC



Over the past few weeks things have been chaotic for me. A new job and other personal endeavors have taken up most of my time (I'm about to get engaged). I just had to post this last encounter because it was too odd. Earlier this week I got a call around 4:45 p.m. about a network that keeps going down. We had some systems people in there working on Exchange, Citrix, and NetApp. They had been rebooting their switches; the network would come back up, and then the switches would go haywire again. Anyway, it got so bad that someone had to go. I figured it was on my way home and I'd just unplug some cables, break the loop, and get back to normal life. Well, seven hours later I was still there with no resolution. First off, we didn't have management access to the switches, and they just seemed to have some weird behaviors. So I'm getting hell from the switches, the owner of the company, and my girlfriend. How do we solve this problem? Back at the job there were no switches that met the needs of the customer, but wait... I have a couple of switches in my rack back home. I told management about the switches I had, and they agreed to put them in place the next day. I came home, saved my configs off them, erased them, and got them ready. Plugged them in, and the network is pumping out packets like a champ. What have we learned from this? Site surveys mean everything, and make sure you have access to the entire environment you're about to change.