Incident Response Planning: Defend Against Cyberattacks
https://www.oneneck.com/blog/incident-response-planning-a-critical-part-in-defending-against-cyberattacks/ (Tue, 28 Mar 2023)

How Incident Response Planning Helps Contain Cyberattacks

Despite businesses pouring resources into cybersecurity, breaches aren’t going away — or even slowing down. In 2022 alone, companies reported a near-record 1,802 breaches, affecting 422.1 million people.

Unfortunately, it’s not a matter of if your data will be breached but when.

Incident response planning helps your organization prepare for security incidents by outlining objectives, processes, and evaluation criteria your team can follow every step of the way. Your organization’s custom plan will guide you through the aftermath of a breach, assist in recovery, and help you fix the vulnerabilities that led to the incident.

What is incident response planning?

Incident response planning means taking the time before a breach occurs to write down the actions you’ll take after it occurs. The Cybersecurity and Infrastructure Security Agency (CISA) describes an Incident Response Plan as a written strategy, approved by your organization’s senior leadership, that guides your organization before, during, and after a confirmed or suspected security breach.

Why do you need an incident response plan?

Once a breach happens, time is of the essence. You need to stop the attack, minimize the damage, and fix the problem quickly so that you can return to work. That’s why pre-planning is essential. By determining your incident response plan before a breach occurs, you’ll know which employees are responsible for which actions, when, and how they will complete them — speeding up your recovery time.

Having a written incident response plan and securing leadership approval tells your team exactly what to do and who will do it.

How do you write an incident response plan?

Starting your incident response plan with a blank page can be difficult. Instead, begin with the guidance the National Institute of Standards and Technology (NIST) issued for computer security incident response. NIST Special Publication 800-61, Rev. 2, provides a framework for creating your own plan.

Along with NIST’s guidance, customize your incident response plan for your organization by assessing your:

  • assets and their level of risk
  • priorities
  • potential vulnerabilities
  • communication methods
  • incident response team members
  • distribution of responsibilities

And, of course, once you write your plan, you’ll need to train your team members so they can communicate effectively and mitigate damage when the next incident occurs.

The 6 Incident Response Phases

At a minimum, your plan should cover these phases of incident response and recovery:

  • Incident Response Planning Phase 1: Prepare
    During the first phase of incident response, ensure that your employees understand their roles and the steps they must take to respond. Practice your response procedures with exercises designed to simulate a breach. You’ll also use this phase to determine how you’ll identify breaches through testing, logs, alerts, or other procedures.
  • Incident Response Planning Phase 2: Identify
    You can receive an immediate alert that an attack may be taking place by implementing a monitoring tool or partnering with a managed detection and response (MDR) provider. You may also be notified through communication from another organization, law enforcement, or a customer. Once your team has evaluated the alert and determined that an attack is taking place, you’ll kick off the remainder of the incident response phases.
  • Incident Response Planning Phase 3: Control and Contain
    You know you have a breach; now it’s time to do something about it. Prevent further damage by isolating the network segment or infected servers. Document exactly what happened and the extent of the damage. If possible, preserve forensic data so you can analyze it in the Review phase.
  • Incident Response Planning Phase 4: Resolve
    Fix the vulnerability that caused the breach by removing malware, hardening and patching systems, and applying software updates.
  • Incident Response Planning Phase 5: Recover
    Return the isolated systems to regular operation and restore normal business processes.
  • Incident Response Planning Phase 6: Review
    Gather the incident documentation and forensic evidence you collected for the incident response team. Analyze the breach and the team’s response, and discuss the lessons learned from the process. Revise your incident response plan based on what worked and what didn’t so you’re fully prepared for the next incident. (A simple, machine-readable sketch of these six phases follows below.)
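Capturing the six phases in a lightweight, machine-readable form can make ownership and next actions unambiguous in the middle of an incident. Below is a minimal Python sketch of that idea; the phase names come from the list above, but every owner role and action is a hypothetical placeholder to swap for the specifics of your own plan.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Phase:
    name: str                 # one of the six phases described above
    owner: str                # role accountable for the phase (hypothetical examples)
    actions: List[str] = field(default_factory=list)


# Hypothetical skeleton of an incident response runbook; replace the owners
# and actions with the roles and procedures defined in your own plan.
RUNBOOK = [
    Phase("Prepare", "Security Manager", ["Run a tabletop exercise", "Verify alert sources"]),
    Phase("Identify", "SOC Analyst", ["Triage the alert", "Confirm the incident", "Open a ticket"]),
    Phase("Control and Contain", "Network Admin", ["Isolate the affected segment", "Preserve forensic data"]),
    Phase("Resolve", "System Admin", ["Remove malware", "Patch and harden systems"]),
    Phase("Recover", "Operations Lead", ["Restore isolated systems", "Resume normal business processes"]),
    Phase("Review", "IR Team", ["Analyze the evidence", "Update the incident response plan"]),
]


def print_checklist(runbook: List[Phase]) -> None:
    """Print a simple checklist so each owner can see their actions at a glance."""
    for phase in runbook:
        print(f"{phase.name} (owner: {phase.owner})")
        for action in phase.actions:
            print(f"  [ ] {action}")


if __name__ == "__main__":
    print_checklist(RUNBOOK)
```

Keeping a file like this in version control alongside the written plan makes it easy to review and update after every exercise or real incident.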

Protect Your Network with Incident Response Planning from OneNeck

Incident response planning can make the difference between a quick recovery from a data breach and a long, painful one. That’s why OneNeck’s security experts can work with you to prepare an incident response plan and recover from breaches faster — so you can get back to business.

OneNeck has your back. Read more about our incident response services here.

Leverage Collaboration Services to Empower Your Team
https://www.oneneck.com/blog/leverage-collaboration-services-to-empower-your-team/ (Tue, 04 Oct 2022)

If you were to survey your employees about aspects of their jobs that could be improved, you’d likely find better collaboration tools and meetings at the top of the list. In fact, in a PwC survey, 73% of respondents said they know of systems that would help them produce higher quality work.

While employees don’t always think executives listen to their concerns, that’s often not the source of collaboration problems. As your company has grown, you’ve had to assemble your tech stack on the fly — trying your best to provide the tools employees need while staying focused on the company’s larger goals.

But eventually, you get to the point where your tech stack starts having a cumulative effect on productivity, whether for better or worse. In this article, we’ll consider some factors that can hinder or help your utilization of collaboration tools.

Under-utilization Leads to Fragmentation

The problem with collaboration tools not being used to their full potential is that it can lead to fragmentation. When employees don’t have the same tools, it becomes difficult for them to communicate and collaborate effectively. In some cases, this can even lead to employees working on different parts of the same project without realizing it.

Nowadays, there are many collaboration services available that can help your team work more effectively. These services can provide a common platform for employees to share files, communicate, and collaborate on projects. By leveraging a collaboration service, you can ensure that your team is using the same tools and is working together towards a common goal.

Choosing the right collaboration tools starts with understanding employee needs.

Choosing a collaboration service isn’t as simple as using a product the technical team finds appealing. It’s crucial to leverage data, employee input, and company goals to ensure you’re committing to the right platform.

  1. Define your needs. To select the ideal collaboration service, you must first decide what you want it to do for your team. During the selection process, employee feedback is critical. Is executive opinion aligned with what employees actually need? From there, you can ask questions about your technical needs. Are you looking for something that will allow employees to share files and communicate easily, or are you looking for a service that facilitates meetings and client communication? You may also need a service that provides an all-in-one solution. Getting clarity on your actual needs will help you use your budget more effectively.
  2. Research and test different services. There are many services available today, so it’s important to find one that fits your needs. In your previous analysis, you may discover that your team only lacks a rock-solid communication app. With that knowledge, you may decide to use a niche app that does the job superbly. However, it’s good to keep in mind that the biggest collaboration boosts will come from tools that allow you to do many jobs within the same ecosystem. So, while your needs today may not be as intensive, you may opt to choose your tools planning for future growth. Some popular communication apps include Microsoft Teams, Slack, and Zoom.
  3. Deploy the collaboration service. Often, deployment can be the most challenging step since it requires training, technical capabilities, and administrative management. A solid transition plan will ensure that collaboration services are deployed uniformly across departments. Additionally, instituting a regular training program will help employees make the switch faster, increasing adoption rates.

Implementing new technology can be a phase that employees dread or are excited about. Much of their feelings will depend on the groundwork you do to get buy-in from your team. You can generate excitement by including your team in the decision-making process — clearly articulating why you’re making the change and providing training. When your team is excited about the change, you’ll have the momentum you need for successful integration.

Facilitate Deployment with a Collaboration Services Partner

Many businesses hit a roadblock when they get to deployment. Their team is excited about migrating to tools that provide a holistic collaboration experience. But they lack the technical expertise and migration experience to get started. If you feel stymied by the prospect of migrating your tools, we can help.

OneNeck offers managed collaboration services that give you the tools you need without the technical headache. We partner with companies like Cisco and Microsoft to provide complete solutions that cover the A to Z of your collaboration needs. And if you’re looking for something more streamlined, we can adapt our solutions to fit your needs. Contact us to learn more.

Simplify Your Computing With Hyperconverged Infrastructure
https://www.oneneck.com/blog/advanced-services/simplify-your-computing-with-hyperconverged-infrastructure/ (Mon, 08 Nov 2021)

Scalability, flexibility and cost-efficiency are all hallmarks of a high-performing data center. However, with scale comes certain challenges. For example, organizations must decide which network architecture will best suit their business needs. Cloud computing offers many advantages for those who want to scale but can be expensive and fall short in the security department.

Traditional three-tier architectures offer the advantage of scaling resources based on actual needs. IT teams can choose to add on to their storage or computing power while leaving networking as is. However, these architectures are often vendor-locked, which makes them expensive to upgrade. And vendors may require customers to purchase full racks for every upgrade, far exceeding their needs.

An alternative is hyperconverged infrastructure (HCI). This architecture simplifies data center construction and streamlines purchasing. How does it accomplish this?

Explaining Hyperconverged Infrastructure

Hyperconverged infrastructure simplifies data centers by allowing businesses to use standard x86 servers for all of their computation needs. X86 servers are the industry standard and can be easily purchased when businesses need to extend their hardware’s capacity. How does HCI make this possible?

HCI uses software virtualization to abstract and pool resources dynamically. By leveraging compute, storage and networking virtualization, it’s possible to access resources across multiple servers as if from a single device — removing the constraint of what individual server racks can handle. For example, you could be using the processing power from two devices while using the storage capacity of 3, 10, or more servers. None of the computing power has to be located on the same rack.
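As a deliberately simplified illustration of that pooling idea (not any vendor’s actual scheduler), you can picture the free capacity of every node as one logical pool that workloads draw from, regardless of which rack a node sits in. All node names and capacity figures below are made up.

```python
# Toy illustration of resource pooling across HCI nodes. Node names and
# capacities are hypothetical; real HCI platforms use far more sophisticated
# placement, replication and data-locality logic.
nodes = {
    "node-a": {"free_vcpus": 8, "free_storage_tb": 2},
    "node-b": {"free_vcpus": 4, "free_storage_tb": 6},
    "node-c": {"free_vcpus": 16, "free_storage_tb": 1},
}


def pooled_capacity(cluster):
    """Aggregate free capacity as if the whole cluster were a single device."""
    return {
        "free_vcpus": sum(n["free_vcpus"] for n in cluster.values()),
        "free_storage_tb": sum(n["free_storage_tb"] for n in cluster.values()),
    }


def fits_in_pool(request, cluster):
    """A workload can land in the pool even when no single node could hold both
    its compute and its storage, because resources are pooled across nodes."""
    pool = pooled_capacity(cluster)
    return (request["vcpus"] <= pool["free_vcpus"]
            and request["storage_tb"] <= pool["free_storage_tb"])


workload = {"vcpus": 12, "storage_tb": 7}               # too large for any one node above
print(pooled_capacity(nodes))                           # {'free_vcpus': 28, 'free_storage_tb': 9}
print("fits in pool:", fits_in_pool(workload, nodes))   # True
```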

Another advantage of hyperconverged infrastructure is administration. Previously, you may have needed separate teams to manage storage, compute and networking servers. Since HCI leverages the same devices for all of these functions, it’s possible to use the same team for everything. Simplified administration cuts costs and breaks down departmental silos.

Simplifying your infrastructure from the traditional three-tier architecture has additional benefits, such as the ability to connect to the cloud. Whether your organization aims to increase security, computing capacity or something else, HCI reduces the need to manage disparate systems when moving capacity to the cloud. HCI can streamline the following cloud models:

  • Private cloud. With hyperconverged infrastructure, your resources will be pooled together. This means that to achieve private cloud capabilities, all that’s needed is to ensure that compute resources are available to those who need them and establish processes that control resource allocation.
  • Hybrid cloud. Traditional server architectures might require a web of connections to ensure all three components of your architecture match up with the cloud. In contrast, HCI simplifies this to relatively few connections, as your network is already pooling resources into a single repository.
  • Public cloud. Simple infrastructure means that migration is also easier. Cloud providers will be more compatible with your hardware, and this will streamline connecting your business to the cloud.

More and more businesses are leveraging cloud resources to augment their private infrastructure. By implementing HCI in your business, you’ll be better prepared for the cloud.

Use HCI To Do More With Your Computing Hardware

Change for the sake of change isn’t progress. Businesses must ensure that upgrades, including moving to HCI, lead to improvement. Below are five ways hyperconverged infrastructure improves your business:

  1. Reduce total cost of ownership. HCI lowers costs in several ways, including better resource allocation, simplified hardware upgrades and expansion, and elimination of vendor lock-in.
  2. Simplify deployment of hardware. HCI helps businesses unify the type of hardware they use since they only need x86 servers to run the infrastructure. This makes supply chain management and deployment easier.
  3. Increase scalability and flexibility. Since HCI pools resources, when organizations need to scale, it becomes a simple matter of adding servers to their infrastructure pool. Additionally, teams can use as little or as much computing power as they need, leaving the rest available to whoever needs it.
  4. Improve security. With HCI, organizations can gain cloud-like infrastructure while maintaining full control over their data.
  5. Balance your usage. With traditional server infrastructure, IT must allocate resources according to projected usage. These projections aren’t always accurate, and as soon as teams exceed projections, upgrades are needed. HCI allows IT teams to allocate resources dynamically based on current needs.

The advantages of hyperconverged infrastructure are compelling enough to make any IT team want to act. However, converting your hardware is more involved than buying a few servers. Our team is experienced in helping businesses transition to HCI. Contact us to learn more.

3 Reasons to Get Excited About Next-gen HCI
https://www.oneneck.com/blog/advanced-services/hpe-hci-2.0/ (Sat, 02 Oct 2021)

The uptake of hyperconverged infrastructure (HCI) over the last decade has been strong, for good reasons: the technology is simple to manage and makes it easier to administer virtual desktops, apps and data. Businesses enjoy being able to scale infrastructure performance and storage capacity easily, by just slotting in a new node.

However, while HCI is suited to virtual desktops and lower-tier, less critical workloads, behind its breakthrough ease-of-use lie architectural limitations that can’t support business-critical apps and mixed workloads without adding risk. It forces a trade-off between performance and simplicity, leading to system lags and even crashes. Businesses find themselves balancing performance, availability and cost on a tightrope of uncertainty.

There is a need for a better solution that delivers the HCI experience of unified management and VM-centric operations with higher availability, faster performance and flexible scaling. The good news is that HPE now offers that solution. It’s more powerful, more cost-effective, and eliminates the performance trade-off. It’s HCI without boundaries.

What’s the excitement about?

HCI 2.0 from HPE is an intelligent platform that disaggregates compute and storage, and integrates hyperconverged control for simple management on a flexible architecture. Powered by HPE InfoSight, it gives enterprises ultimate simplicity for their virtualized environments with fast app performance, always-on data resilience, and resource efficiency.

1. No more trade-offs

HPE HCI 2.0 scales compute and storage independently, so no resources are wasted. You don’t have to buy both when you only need one. More importantly, you can scale each to ensure that your system never buckles under pressure and business continuity is maintained.

2. Intelligently simple

HCI 2.0 removes complexity in IT operations and provides greater stability. Predictive support automation and problem prevention mean no more time wasted on mundane, routine tasks and firefighting. Problem-solving is left to InfoSight’s predictive analytics and expert support services. It’s simple to deploy: configuration is automated and completed in 15 minutes, not hours or days, and VM-centric data services and resource management make it simple to manage.

3. Absolutely resilient

HPE HCI 2.0 offers 99.9999% storage availability, and there is no single point of failure. It can tolerate three simultaneous drive failures while enabling faster and more frequent backups with application-consistent snapshots and advanced replication.

We’re here to help.

As an HPE Platinum Partner, we have extensive experience helping our customers accelerate innovation and time to market by consolidating their IT with an intelligent hyperconverged infrastructure. If you’re interested in learning more about HPE’s HCI 2.0, our HCI experts are here to help.

Protecting Your Backups from Ransomware
https://www.oneneck.com/blog/security/are-your-backups-protected-from-ransomware/ (Mon, 31 Aug 2020)

Ransomware is at our doorstep. We cannot ignore it any longer or assume we are not a target. In recent years at OneNeck, we have seen a significant upward trend in ransomware attacks. Even more troubling, in the last year we have seen bad actors getting smarter: they are now targeting your backup server and backup data to prevent you from recovering from the attack. As ZDNet stated, “The number of ransomware strains targeting NAS and backup storage devices is growing, with users ‘unprepared’ for the threat.”

In response, the backup industry has issued some key recommendations you can implement to make it harder for those bad actors to succeed. Below are recommendations from the industry that we at OneNeck have seen successfully slow attacks and reduce the risk of ransomware infecting your backup infrastructure:

  1. Remove your backup servers from the domain.

The goal of this recommendation is to prevent a compromised domain account with privileged access from leapfrogging from server to server until it gains full management access to your backup infrastructure. This is a great first step, and depending on your backup infrastructure, it may be sufficient to keep those bad actors from gaining access to that data.

  2. Implement multi-factor authentication (MFA) on your backup servers.

Preventing the bad actors from accessing your backup management software is the goal of this recommendation. Removing all other management consoles from admin desktops and using a dedicated backup management server with multi-factor authentication makes it more difficult for bad actors to gain access to your backup infrastructure.

  3. Create an isolated network and control who can access it.

If your backup servers and repositories are on the same network as your production servers and data, it is not difficult for bad actors to jump from a compromised server and reach your backup infrastructure over the network. Creating a separate network makes it easier to apply access control lists and block certain types of traffic from reaching your backup infrastructure. You can also lock down which devices have access to that separate network, making it more difficult for bad actors to gain access and wreak havoc.

  4. Send a third copy of your backup data into object storage.

Object storage changes the way data is written, and rewritten, in your backup repositories. Ransomware, by its nature, wants to read and then overwrite or append to the original file in order to encrypt it. Object storage by design only allows create and delete operations, making it much more difficult for ransomware to encrypt an object store. (A minimal code sketch of an immutable object-storage copy appears after these recommendations.)

  5. Implement an air-gapped backup repository.

This is the strongest protection for your backup environment, but it also carries the most cost and complexity. The goal of an air-gapped backup repository is to keep the backup copy and its infrastructure offline from the production network; it comes online only for a short period to pull the latest data copy and scan it for ransomware. Managing the air-gapped backup equipment requires physical access, which makes it extremely difficult for bad actors to reach.

To learn more about air-gapped backups, watch this webinar.
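On the object-storage recommendation above, many backup products can write an immutable copy natively, but the underlying idea can be sketched in a few lines of Python against an S3-compatible object store. This is a minimal sketch only: the bucket name, object key, file name and 30-day retention are hypothetical placeholders, and the bucket must have Object Lock enabled when it is created.

```python
# Minimal sketch: copy a backup file to an S3-compatible object store with a
# retention lock so the object cannot be overwritten or deleted until the
# retention date passes. Bucket, key, file name and retention period are
# hypothetical placeholders.
from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client("s3")  # endpoint and credentials come from your environment

retain_until = datetime.now(timezone.utc) + timedelta(days=30)

with open("weekly-backup.vbk", "rb") as backup_file:
    s3.put_object(
        Bucket="example-backup-copies",        # hypothetical bucket with Object Lock enabled
        Key="weekly/weekly-backup.vbk",        # hypothetical object key
        Body=backup_file,
        ObjectLockMode="COMPLIANCE",           # retention cannot be shortened or removed
        ObjectLockRetainUntilDate=retain_until,
    )
```

Pairing a locked third copy like this with the air-gapped repository described above leaves ransomware very little to work with, even if it reaches your primary backup server.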

OneNeck would be happy to open the conversation to discuss how these preventative measures can be implemented in your environment to better protect your backup infrastructure. OneNeck can also help with an offensive approach to ransomware and business continuity. Backups always provide a good defense, but any good game plan has both an offensive and defensive component.

Keep Moving Forward. We Got Your Back(up).

Do You Know Who’s Responsible for Your Office 365 Data?
https://www.oneneck.com/blog/managed-services/do-you-know-whos-responsible-for-your-office-365-data/ (Fri, 24 Apr 2020)

No doubt about it, for Microsoft Office 365 (O365), business is booming. On their Q3 FY19 earnings call, Microsoft CEO Satya Nadella said that Office 365 is now used by 180 million monthly active users and growing at more than 4 million users per month – and that was before the recent surge in demand for collaboration tools!

But for many organizations utilizing O365, there is a gray area around whether Microsoft’s native tools support backup and recovery, and if so, to what extent. The confusion boils down to this: the backup Microsoft provides and the backup the customer assumes they’re getting are often two different things.

So, what does Microsoft cover? Microsoft O365 comes with what’s known as geo redundancy, which is not the same thing as backup. Backup is when a historical copy of data is made and then stored in another location. A critical component of backup is having direct access to and control over that copy, so if data is lost, accidentally deleted or maliciously attacked, you can quickly access and recover it. Geo redundancy, by contrast, protects against site or hardware failure, so if there is an infrastructure crash or outage, users remain productive and are rarely aware there’s been a problem.

The bottom line and key takeaway here:

  • MICROSOFT is responsible for the uptime of O365.
  • The CUSTOMER is responsible for the protection and long-term retention of their O365 data.

The Shared Responsibility Model is further detailed in this Veeam graphic…

[Graphic: Microsoft Office 365 Shared Responsibility Model]

Clearly, it’s important to recognize that in the case of O365, you need a backup plan. While Microsoft has a solid reputation for high availability of O365 infrastructure and applications, there are numerous things that can happen, and likely will happen, that can open your organization to risk:

  • Data loss and security breaches: Data can be lost from internal and external sources, ranging from accidental deletion, ex-employee actions or even external breaches (e.g., malware and ransomware).
  • Retention and compliance: While Microsoft offers retention policies to hold your O365 data for a longer period of time, these are not available with all licensing types, and preserving your data for longer is not the same as backing it up. A third-party backup strategy is a must to maintain an offsite copy (outside of the O365 ecosystem) and the control you would expect in a restore situation. (A simple backup-freshness check is sketched after this list.)
  • Lack of control in a hybrid world: With today’s SaaS-driven environment, visibility and data control is a challenge that backup can help address.
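One practical habit that follows from the points above is regularly verifying that your offsite copies actually exist and are current. The sketch below is a generic, product-agnostic example; the backup path and 24-hour recovery point objective are hypothetical, and most backup suites expose the same information through their own reporting.

```python
# Generic sketch: warn when the newest file in each offsite backup folder is
# older than the recovery point objective (RPO). The path and RPO value are
# hypothetical placeholders; adapt them to wherever your O365 backups land.
from datetime import datetime, timedelta, timezone
from pathlib import Path

RPO = timedelta(hours=24)               # hypothetical recovery point objective
BACKUP_ROOT = Path("/backups/o365")     # hypothetical offsite backup location


def stale_backups(root: Path, rpo: timedelta):
    """Return the backup folders whose newest file is older than the RPO."""
    now = datetime.now(timezone.utc)
    stale = []
    for folder in root.iterdir():
        if not folder.is_dir():
            continue
        newest = max(
            (f.stat().st_mtime for f in folder.rglob("*") if f.is_file()),
            default=None,
        )
        if newest is None or now - datetime.fromtimestamp(newest, timezone.utc) > rpo:
            stale.append(folder.name)
    return stale


if __name__ == "__main__":
    for name in stale_backups(BACKUP_ROOT, RPO):
        print(f"WARNING: offsite backup for '{name}' is older than the RPO")
```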

There’s no doubt that Microsoft O365 is a great solution that brings increased productivity to the modern workforce. But ensuring the access and control of your O365 data is imperative in avoiding risk. If you’d like to learn more about why you need a backup solution for O365, check out this informative eBook from our partner Veeam: 6 Critical Reasons for Office 365 Backup.

And if you’d like to talk with one of our backup and recovery experts, we’re here to help.

Microsoft Renames the Office 365 SKUs
https://www.oneneck.com/blog/microsoft-renames-the-office-365-skus/ (Tue, 21 Apr 2020)

As Esther Dyson so eloquently put it, “Change means that what was before wasn’t perfect.” And as we all know in technology, change is a given.

Today brings more change. As of today, Microsoft is changing the names of Office 365 SMB SKUs and making way for Microsoft 365.

Their reasoning was that customers have had difficulty deciding which SKU is right for them. Their SKUs were:

  • The mid-priced SKU is called Office 365 Business Premium.
  • The premium SKU is called Microsoft 365 Business.
  • The Office 365 Business Essentials SKU has more cloud services value than the Office 365 Business SKU.

We get it. That’s confusing. So, here’s a quick glance at what’s changing.

[Graphic: Office 365 SMB SKU naming changes]

Note that there are no naming changes for the Office 365 Enterprise or Microsoft 365 Enterprise SKUs. It is also important to note that subscription prices, product features, offer IDs and SKUs, and Office 365 Enterprise SKU names are not changing.

If you’d like to learn more about these changes, check out the Microsoft Office 365 SMB Naming Update Page. And as always, if you have any questions regarding Microsoft’s solutions, we are here to help.

Keep Moving Forward. We Got Your Back.

A Remote Workforce Powered by Partners Who Care
https://www.oneneck.com/blog/a-remote-workforce-powered-by-partners-who-care/ (Sat, 21 Mar 2020)

We’re living in a very strange and uncertain time – uncharted waters one could say. Sequestered in the relative safety of our homes, unsure of what next week or even tomorrow might bring, business must still go on. But while we may be working in isolation, it’s still people that power the modern business. And in any business, the team is paramount. It’s where creativity, collaboration, and solutions all come together within a group to exceed the sum of its parts.

Now, rather than a bustling office, it’s the remote team that is the hub and the critical component of many a company’s success. So, ensuring that workforce is equipped to do their job and effectively collaborate has become the number one priority for many organizations.

A positive in all of the uncertainty has been that we’ve seen an overwhelming response from people and businesses offering to help. And at OneNeck, we’re fortunate to work in an industry and with partners that are equipped to help make an effective remote workforce a reality. Numerous offers have been extended from our manufacturer partners that give IT teams the tools to enable their remote workers to continue to securely collaborate with each other and drive their business forward, even from outside the office.

Whether it’s collaboration tools, security best practices, softphones, virtual desktops or even VPNs you need to enable your remote workforce, our partners have stepped up and are offering some great deals to help make it happen. In an effort to make them easier to find, we’ve assembled a list of these timely offers for you.

CHECK OUT THESE PARTNER OFFERS

And should you need any help understanding what your options are or how to scale your current technology to support your remote employees, we are here to help. Our engineers have worked, and continue to work, around the clock, helping our customers adapt to doing business from anywhere.

Keep moving forward. We got your back.

HCI – Driving Real Business Results
https://www.oneneck.com/blog/hci-driving-real-business-results/ (Thu, 19 Sep 2019)

All businesses have to transform and adapt to do business in an increasingly digital world. But to transform, they must first address the foundation that their business sits on, making converged infrastructure (CI) and hyperconverged infrastructure (HCI) a great fit that enables efficiency and scalability on validated infrastructure. 

Since 2012, HCI technology has offered even greater hardware and workload consolidation than its predecessor, CI. HCI has accelerated IT transformation through its software-defined infrastructure approach that does not require the level of storage and server management expertise needed to utilize CI.

Where is HCI now?

Over the last nine years, HCI has been leveraged by a number of large organizations looking to modernize their data centers and to build out public and private cloud infrastructure.

In a recent survey conducted by ESG, more than 98% of transformed companies said they are using either converged infrastructure (CI) or HCI, and are running 35% of their applications on either platform. Moreover, the global HCI market size is expected to grow from USD 4.1 billion in 2018 to USD 17.1 billion by 2023, at a Compound Annual Growth Rate (CAGR) of 32.9% during the forecast period. (Source)

These growth projections likely come as no surprise to organizations already using HCI. That is because most are seeing firsthand how HCI is supporting transformation and driving meaningful business value for their organizations.

Time is Money
Unlike the legacy approach, HCI is already engineered and validated prior to installation, so teams do not have to worry about spending time integrating components. This saves a significant amount of effort from IT management and staff, and frees up their time to work on more strategic and higher value projects.

Similarly, thanks to HCI’s consolidated interface, which provides a comprehensive view of all IT components, and its significantly smaller hardware footprint, IT staff spend much less time monitoring components, allowing for more reliable and consistent operations overall.

In fact, according to the ESG survey, organizations utilizing CI/HCI spent 31% less time on routine system management.

Beyond saving time, many of these organizations have seen significant cost savings. A smaller hardware footprint requires fewer layers of management, translating into decreased operational costs like labor, power and cooling. IT management in HCI/CI organizations reported a 21% to 30% reduction in operational expenditures.

Agility and speed lead to better service and competitive advantage
HCI technology accelerates IT transformation through faster application deployment and completion of integration tasks at greater speed than ever before. This also increases the chances of getting to market faster than the competition.

Organizations using HCI/CI reported they were seven and a half times more likely to complete most app deployments ahead of schedule, and were two and a half times as likely to be significantly ahead of their competitors in time to market.

By allowing for greater IT agility, HCI also creates a more cloud-like environment, enabling teams to provide an IT-as-a-service (ITaaS) experience to their users. This opens the door to even greater flexibility and faster response to business priorities, helping the organization work toward its digital transformation objectives. (Source)

These are just a few of the many benefits large organizations have experienced by integrating HCI technology into their businesses. The significant time and cost savings allow IT to focus more resources and effort on digitally transforming their organizations and contributing to greater strategic imperatives for the business.

———

Interested in learning more? Check out this informative report from ESG on the role of CI and HCI in IT transformation.


DOWNLOAD THE REPORT

Understanding Software-Defined WAN
https://www.oneneck.com/blog/datacenter-colocation/understanding-software-defined-wan/ (Tue, 18 Jun 2019)

Bandwidth needs are skyrocketing. To meet the growing demand, there is a transition underfoot away from the traditional wide area network (WAN) to the software-defined wide area network (SD-WAN). According to IDC, SD-WAN technology is projected to exceed $6 billion in revenue by 2020. SD-WAN promises great change, and adoption is gaining traction, but understanding how to deploy it to benefit your organization is still evolving for many organizations.

Software-Defined WAN Explained

Using virtualization and network overlays to deliver better connectivity, reduce complexity and lower overall costs, SD-WAN is an alternative approach to designing and deploying enterprise WANs. In a traditional WAN, local and corporate networks are connected via proprietary hardware or fixed circuits. SD-WAN moves that network into the cloud, using a software approach, adopting a more application-centric model rather than relying on the traditional hop-by-hop routing.

The goal is to simplify the WAN setup so that an administrator only needs to plug in a cable for the appliance to contact the central controller and receive the configuration. The aim is to eliminate dependency on private WAN technologies like MPLS, which are notorious for long provisioning times and expensive contracts.

There is a growing interest in SD-WAN as users increasingly access applications via the cloud, diminishing reliance on dedicated pipes to on-premises data centers. Even with its rising popularity, many larger enterprises are still reluctant to fully adopt the solution and are expected to deploy a hybrid WAN architecture.

While SD-WAN is still evolving, it is gaining traction in the marketplace because of these emerging advantages:

  • Improved Performance
    The latest SD-WAN technologies leverage end-to-end network visibility and feedback to improve transmission efficiency with minimal lag time. SD-WANs can identify the quickest path from source to destination in real time and re-route packets accordingly. Routing decisions are made based on data, such as latency and applied QoS policies. (A simplified sketch of this path-selection logic follows the list.)
  • Hard and Soft Cost Savings
    In a traditional WAN, hard costs often include the hardware, such as the routers. SD-WAN runs in the cloud and relies significantly less on physical hardware. SD-WAN also reduces soft costs by cutting down on the number of engineer hours required by easing WAN management.
  • Increased WAN Resilience
    SD-WAN increases WAN resilience; it proportionately aggregates capacity, making the bandwidth available to all applications. It’s also able to split traffic for a single application across multiple paths for improved throughput. This assures optimal packet delivery with multipathing and error correction.
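To make the path-selection idea from the "Improved Performance" point above concrete, here is a deliberately simplified Python sketch. The link names, measurements and policy thresholds are hypothetical, and production SD-WAN controllers weigh many more signals (jitter, loss patterns, link cost, per-application QoS class) than this toy example does.

```python
# Simplified illustration of application-aware path selection across WAN links.
# Link names, measurements and the VoIP policy below are hypothetical.
paths = {
    "mpls-1":      {"latency_ms": 38, "packet_loss_pct": 0.1},
    "broadband-1": {"latency_ms": 22, "packet_loss_pct": 0.4},
    "lte-backup":  {"latency_ms": 65, "packet_loss_pct": 1.2},
}

voip_policy = {"max_latency_ms": 50, "max_packet_loss_pct": 0.5}


def select_path(candidates, policy):
    """Pick the lowest-latency path that still satisfies the application policy."""
    eligible = {
        name: stats for name, stats in candidates.items()
        if stats["latency_ms"] <= policy["max_latency_ms"]
        and stats["packet_loss_pct"] <= policy["max_packet_loss_pct"]
    }
    if not eligible:
        return None  # e.g., fall back to a best-effort default route
    return min(eligible, key=lambda name: eligible[name]["latency_ms"])


print(select_path(paths, voip_policy))   # -> broadband-1
```

In a real deployment these measurements come from continuous probing, and the selection re-runs constantly so traffic shifts as link conditions change.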

Is SD-WAN Right for You?

SD-WAN’s most vital benefit is that its architecture is better suited for the demands of mobile and real-time applications, and most importantly, it’s often better at meeting the demands of the cloud. However, while SD-WAN can reduce the cost and complexity associated with the traditional WAN, enterprise IT departments will need to decide whether SD-WAN is an investment worth pursuing based on a variety of factors including:

  • Is your organization spending increasingly more time and overhead on connectivity on a consistent basis?
  • Is your WAN providing the desired resilience for anytime, anyplace computing?
  • Are you experiencing difficulty getting good performance on demanding applications with your existing WAN?
  • Is your WAN able to serve the needs of divergent applications from a performance, compliance and security standpoint?

If you answered in the affirmative to most or all of the above questions, then maybe it’s time to consider SD-WAN. If you are unsure if SD-WAN is the right fit for your organization, read our SD-WAN hype or reality eBook to gain more information.


SD-WAN - Hype or Reality?

Understand Data Before Migrating to the Cloud
https://www.oneneck.com/blog/cloud/migrating-to-the-cloud-starts-with-understanding-data-and-strategy/ (Tue, 25 Sep 2018)

Most CIOs today are either considering moving to or are already in the process of migrating to the cloud, whether it’s public, private or a hybrid combination. But in order to strategically migrate to any cloud solution, applications and supporting data environments and services must be clearly defined.

With the availability of cheap storage and the explosion of new data being created every day, companies are struggling to truly understand their data environments. It’s not that they can’t find the right data, but they often THINK they have the right data when they actually don’t, as the real data may exist in multiple systems and be only partially accurate as it relates to some other data or process. This is why the popularity of Information Lifecycle Management (ILM) practices has grown so dramatically, as the need to manage the flow of information systems’ data across an organization is mounting in importance.

Up until a few years ago, most CIOs were content to let the data keep growing and buy more storage if necessary. But now with the push towards the cloud, CIOs are forced to bundle applications, data and services, as well as plan migrations. But in order to accomplish this, an organization’s data environment must be clearly defined, so that applications and services don’t break when the migrations occur.

While CIOs today are all in with the cloud, most do not know where to start or what to consider when defining their data environments.

In our experience working with our customers here at OneNeck, there are at least five areas to consider (a simple sketch for capturing your answers follows the list).

  1. What are the characteristics of the data?
    Data typically has some relational correlation to other data. What are these relationships? How do these data pieces interact with each other? Is the data time-sensitive? After a certain time period, is the data no longer relevant? Is the data context-sensitive? What is the size of the data? How fast is it growing? How fast can it be retired and/or deleted?
  2. What are the data entities?
    By entities, we mean groups of data and how they relate to other entities and applications. This is the big picture of data and how it relates and flows through systems.
  3. Who owns the data, and who is the steward?
    A business person (outside of IT) typically owns the data, and the IT person who helps take care of the data is known as the data steward. Who are these people, and what are their responsibilities to the data? Who decides what data is needed? Who decides the processes that flow and interact with the data? Who maintains the systems that the data runs on? Who is responsible if the data is inaccurate or unavailable?
  4. What is the data lifetime and criticality?
    How important is this data? How long is the data valid? How fast do we need it?
  5. What's important to YOUR business?
    What are the critical pieces of data or systems that are important for your day-to-day business operations?
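One way to turn the answers to these five questions into something actionable is a simple, structured inventory record per data entity. The Python sketch below is illustrative only; the field names and example values are hypothetical and should be adapted to your own assessment.

```python
# Hypothetical skeleton for a data inventory entry covering the five areas
# above; field names and the example values are illustrative placeholders.
from dataclasses import dataclass, field
from typing import List


@dataclass
class DataEntity:
    name: str
    related_entities: List[str] = field(default_factory=list)  # relationships to other entities
    size_gb: float = 0.0
    growth_gb_per_month: float = 0.0
    time_sensitive: bool = False
    retention_months: int = 0      # how long the data remains valid
    owner: str = ""                # business owner (outside of IT)
    steward: str = ""              # IT person who cares for the data
    criticality: str = "low"       # e.g., low / medium / high for day-to-day operations


inventory = [
    DataEntity(
        name="customer-orders",
        related_entities=["customers", "invoices"],
        size_gb=450,
        growth_gb_per_month=20,
        time_sensitive=True,
        retention_months=84,
        owner="VP of Sales",
        steward="ERP DBA",
        criticality="high",
    ),
]

# Example: surface the high-criticality entities that should be scoped first
# when planning a cloud migration wave.
print([entity.name for entity in inventory if entity.criticality == "high"])
```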

In Summary…

At OneNeck, we get it. We see organizations struggling with these issues as they attempt to manage their data and consider migration paths to various cloud platforms. That’s where we come in and help. As part of our Advanced Services group, we can help you define YOUR data, giving you the starting point in a step-by-step approach to the cloud. Rome wasn’t built in a day, and neither is your cloud strategy. So, slow down and start with a clear view of your data, and you won’t regret it…

Interested in hearing about how my team has helped some of our clients get control of their data? Check out this eBook with real-world case studies.



DOWNLOAD NOW

Future of Industrial Wireless Networks on the Factory Floor
https://www.oneneck.com/blog/managed-services/future-wireless-factory-floor-network-solutions/ (Tue, 28 Aug 2018)

The challenges faced by today’s manufacturer are in many ways the same as they’ve been in the past: maintain the right amount of inventory, optimize production efficiency and never lose sight of quality. Yet addressing these challenges has become more tied to the underlying technology than ever before.

Innovation and digital manufacturing are now made possible by the underlying infrastructure, which brings full network capability along with wireless-enabled applications and systems that cut costs while increasing productivity and output.

To keep up, manufacturers are updating their facilities and network infrastructure to take advantage of today’s industrial wireless network capabilities, and they are experiencing significant business benefits:

  • Uptime and productivity: In manufacturing, time is most definitely money, and when production is halted, thousands of dollars can be lost in a matter of minutes. By connecting the people running the line with the machines right on the floor, today’s wireless technologies can accelerate decision making and keep the workforce close to where the production is happening.
  • Cost reduction: Aside from the obvious cost savings of wireless over wired, there are other reasons a modern wireless network can impact the bottom line. Wireless enables faster time to market with increased communication efficiencies and reduced complexities.
  • Real-time decision making: Like any modern business, the faster decisions can be made in response to market shifts, the more quickly those decisions impact the business. Wireless connectivity in a manufacturing facility not only increases collaboration but drives a whole new level of visibility across the factory floor, giving employees the power to resolve issues faster. In addition, mobile devices that empower employees throughout the plant to work where they’re needed most are changing the game for today’s manufacturer.

It’d be hard to dispute the impact that industrial wireless networking has had on manufacturing, but there are often still misconceptions around its reliability, security, bandwidth and latency. In this informative eBook, Cisco explores some of these misconceptions and offers some tips for getting wireless right in today’s modern manufacturing.

Modern IT Collaboration & Unified Communications Solutions
https://www.oneneck.com/blog/managed-services/move-to-modern-collaboration-unified-communications-solutions/ (Tue, 17 Apr 2018)

According to Cisco, modernizing the workplace through digital technologies is one of the defining movements of our time. The disruptive impact of digital transformation is affecting every industry as organizations massively scale up their initiatives.

Unified communications (UC) is critical to digital transformation because it provides employees, customers and partners with a wide range of modern collaboration tools and technologies empowering them to message, meet, and call instantly; strengthening relationships and increasing productivity across the enterprise.

By utilizing the right UC solutions, organizations will increase productivity and improve customer service. In addition, they will be able to:

  • Modernize their workplaces to meet the needs of a changing workforce, thus helping improve employee recruitment and retention.
  • Expand their potential talent pool by supporting enhanced and secure mobility, collaboration, device ubiquity and geographic diversity.
  • Reduce total cost of ownership through lower facility and travel costs.

We know that digital transformation is the future. It’s imperative that IT leaders embrace and lead the coming change. But how do you get there? How do you adapt your current UC technology to satisfy the needs of today’s more mobile and distributed workforce? Cisco recommends the following 10 considerations to keep in mind when moving to more modern collaboration and UC solutions:

  1. Take inventory of what you have: This goes beyond technology. Make sure you also understand your corporate culture and how it works. Collaboration and UC are enabling technologies; how they are used, and by whom within your workforce will determine their success.
  2. Set reasonable goals: Determine if it makes sense to work with a specific team or department and develop a proof of concept. Remember that each organization will have its own challenges and expectations, and make sure that you have the resources in place to deliver upon the expectations you set.
  3. Involve users and line-of-business managers: Collaboration and unified communications only work if they provide users with an experience that they value. Don’t just make technology available to your users. Involve them in the process from beginning to end, including providing the training needed to facilitate adoption.
  4. Focus on ease of use: This gets back to the previous point. If the user experience is compromised, or if the deployment is time consuming and difficult to manage for IT, then the technology will not be used to the extent you may desire.
  5. Quality counts: This is a new world. IT consumerization means users expect a glitch-free, high-quality experience. If you are offering collaboration tools such as VoIP and video, choose a supplier that understands these technologies and can deliver the underlying infrastructure required. Also, make sure your technology vendor has longevity in meeting the needs of the most demanding enterprise customers and service providers. In the UC market, there have been too many instances of failing providers leaving customers without future support.
  6. Ensure a great mobile experience: Mobility is one of the most important considerations in empowering today’s collaborative workplace. Your mobile connectivity and collaboration tools must assure a great mobile experience no matter which device is being used, at whatever time and from whatever location. And remember, mobility is not just for your workers.
  7. Modernize your infrastructure: Now is the time for all companies, large and small, to root out those last pockets of TDM and ensure a fully digital experience for the workforce. Don’t allow your organization to be stuck with an obsolete technology when there are far better and more cost-effective alternatives available.
  8. Embrace the cloud: This is not to say that you should use cloud models for all of your collaboration and UC requirements. But you should work with a technology provider that gives you flexibility to use cloud models whenever and wherever you deem them appropriate. Consider providers that offer commercial solutions that allow you the flexibility to deploy some services on premises (where it makes sense) and to consume others from the cloud. And no matter what approach you take, make sure the result is a consistent and simple-to-use experience for end users and IT.
  9. Embed collaboration into your line of business applications: To ensure that your organization maximizes the business value of collaboration, it is important to embed communication and collaboration into your applications and business practices. This can best be achieved using application program interfaces (APIs) and software development kits (SDKs).
  10. Choose an experienced technology partner: Collaboration and UC solutions together represent one of the most important investments your company will make to ensure a successful future. You can’t afford to go wrong, and you don’t want to be in a situation where you have to swap out partners or technologies down the road. It is important to choose a technology vendor that is a leader in providing all elements of your collaboration and UC solution from end to end. This way, you will ensure a successful deployment for today and a clear path for the future.

Source: “Cisco Essential Guide to Workplace Modernization”

Ready to modernize your UC solution? OneNeck® IT Solutions, with our partner Cisco, is here to provide world-class collaborative services that bring people together and enable them to achieve extraordinary things. Contact us today to learn more.

Is Intent-Based Networking the Next Big Thing?
https://www.oneneck.com/blog/datacenter-colocation/intent-based-networking-the-next-big-thing/ (Wed, 04 Apr 2018)

Intent-based networking is being called the “next big thing” in networking. Building on the power of machine learning, software-defined networking (SDN) and advanced automation, intent-based networking systems (IBNS) have the potential to improve the way administrators monitor and respond to network events and conditions.

“Traditionally, network administrators manually translate business policies into network device configurations, a time-intensive and error-prone activity,” Patrick Hubbard wrote in Cloud Expo Journal. With IBNS, Hubbard added, there are still all of the vital tenets of traditional systems, but now you have “the addition of observability, autonomous execution access, control policy, and a critical layer of machine-learning capabilities that allow automatic decision-making based on the analysis of observed network behavior.”

With the rise of the Internet of Things (IoT) and an array of mobile devices with access to enterprise networks, the ability to have the full visibility and accountability of connected devices is a must to ensure the security of both the network and data.

Let’s Dig a Little Deeper into IBNS…

In a blog authored by Gartner analyst Andrew Lerner, IBNS is described as “lifecycle management software” that “can improve network availability and agility.” Specifically, it allows administrators to design networks to behave in a prescribed way — including dictating what is and isn’t monitored.

With an IBNS platform, for example, administrators can see in real time what devices are connected to the network and evaluate them for security — driving faster decisions about which devices should stay or go. In addition to improving security, IBNS also has the potential to increase efficiency by reducing time spent on tedious administrative tasks — freeing IT to focus on more innovative, business-impacting initiatives, while providing a more accurate view of network activity.

IBNS is Gaining Traction

While IBNS in concept has been part of the networking management discussion for a few years, big players like Cisco are now taking more proactive steps to lead the IBNS transformation with their intent-based networking strategy, which they’ve dubbed “The Network. Intuitive.”

“All IT administrators want better access control, massive scalability, security and multi-vendor device management,” Will Townsend writes in Forbes. With IBNS, administrators will be able to manage networks more efficiently, especially at a time when hundreds of new devices are requesting access. Through artificial intelligence and machine learning capabilities, the job of the administrator will become faster and easier, even as networks and devices grow more complex.

While the technology surrounding IBNS is still in its infancy, Cisco’s leadership in this space will help move it forward. Andrew Lerner, however, cautions that it won’t be mainstream before 2020.

But if it is coming, the question becomes, should I be preparing for it?

IBNS builds on advantages driven by automation, SDN and orchestration. So, you can start by integrating more machine learning and data analytics into your network. Equally important is a commitment to focus on learning and understanding the intricacies of the approach, to ensure it can work reliably in your environment.

In addition, remember that you’re not in it alone. Partners like OneNeck are here to help guide you through these major technology transitions. Much like converged systems, next-gen security, cloud or even SDN have been evolving the way we operate and protect our infrastructures, so intent-based networking has the potential to be that next big disruptor. So, stay tuned, and we promise that when the time comes, our experts will be here to walk alongside you, guiding you through the “next big thing”…

 

]]>
A Closer Look at Mobility in the Construction Industry https://www.oneneck.com/blog/mobility-in-the-construction-industry/ Thu, 21 Dec 2017 18:00:00 +0000 https://www.oneneck.com/blog/mobility-in-the-construction-industry/ Mobility, cybersecurity and big data must work seamlessly together. This is especially true in the construction industry. According to a recent CompTIA article “IT becomes a mission-critical way to ensure people on the construction site can use and share tasks, data and communication in a timely and secure manner.” In this article, Mike Keemle of […]]]>

Mobility, cybersecurity and big data must work seamlessly together. This is especially true in the construction industry. According to a recent CompTIA article, “IT becomes a mission-critical way to ensure people on the construction site can use and share tasks, data and communication in a timely and secure manner.”

In this article, Mike Keemle of OneNeck discusses how the increased use of new and mobile technology is changing how IT is managed. He goes on to say, “Mobile solutions are not just for accessing employee emails and calendars anymore.” He also notes that “smartphones and tablets are being used to run point of sales systems and remotely control security and environmental systems.”

Find out what else Keemle had to say about mobile solutions, including how they have become a game changer, what to consider before implementing one and the future of mobile IT.

]]>
Why Mobile Device Management is No Longer Enough https://www.oneneck.com/blog/mdm-is-no-longer-enough/ Thu, 30 Nov 2017 18:00:00 +0000 https://www.oneneck.com/blog/mdm-is-no-longer-enough/ What a difference a decade makes. Ten years ago, smartphones and tablets were still novel, but today, it’s likely that most (if not all) of your employees own one or several. And they’re using them to accomplish personal and professional tasks at home, in the office and on the road. As the lines between home […]]]>

What a difference a decade makes. Ten years ago, smartphones and tablets were still novel, but today, it’s likely that most (if not all) of your employees own one or several. And they’re using them to accomplish personal and professional tasks at home, in the office and on the road.

As the lines between home and office continue to blur, we’ve accepted that the mobile workforce is here to stay. BYOD (Bring Your Own Device) is widely supported and has also given life to bring your own apps (BYOA), bring your own cloud (BYOC) and bring your own network (BYON).

All of this means that corporate data travels with your employees, no matter where they go, and is shared between devices and networks. But with increasing data mobility comes significant risk to data privacy and security.

How Do Data Privacy Concerns Shape the Future of Enterprise Mobility Management?

When it comes to mobile device management (MDM), it is no longer enough to simply manage devices. The focus has shifted from device management to data protection in response to growth in the number and sophistication of threats exploiting mobile vulnerabilities. However, while securing devices, one can’t lose sight of the need to ensure productivity, so users can get their work done regardless of time, location and choice of device.

Enterprise Mobility Management (EMM) has emerged as an approach to accomplishing these goals. With the right mix of MDM, Mobile Application Management, and security and content management capabilities, EMM is the starting point for managing anything on a mobile platform.

As noted recently by Gartner, EMM suites are the glue that connects mobile devices to their enterprise infrastructure. A comprehensive EMM solution offers your workforce not only security, but the tools and simplicity required to fuel productivity and get access to what’s needed to get work done.

Beyond securing mobile workforces and devices, EMM is vital to future IT and security strategies focused on getting a handle on the interconnectedness driven by the Internet of Things (IoT). As the IoT gives organizations more endpoints to manage on the network, EMM will need to keep pace to reliably secure these new technologies.

What to Look for in Your EMM Solution

In choosing an EMM offering, it is important to seek out a solution that offers strong identity management tools with single sign-on capabilities to optimize the user experience across mobile devices. Enabling fast, simple onboarding is equally valuable to the end user experience. Just a few steps should be all that’s required to populate devices with everything employees need to be productive.

VMware AirWatch is an EMM solution that answers the challenges of both productivity and security and allows your IT team to:

  • Manage all endpoints in a single solution
  • Support full application lifecycle from development to deployment
  • Automate processes and deliver intelligent insights
  • Protect corporate apps and data on any network
  • Increase mobile productivity with engaging business applications

For your end users, the AirWatch Content Locker allows them secure access to managed content and features:

  • Aggregated search to simplify access to all corporate content across multiple content repositories and devices
  • A single access point to content across OneDrive, Google Drive, Box, Dropbox and others
  • Simple file sharing that keeps your team in sync with the latest content from your private network
  • Support for various devices (phone, tablet, desktop) and file types

Simplicity, productivity and security. All are key to finding the right EMM solution. The experts at OneNeck can help you assess your mobility risks, security needs and productivity goals, and identify the right EMM solution to meet your unique needs.

AirWatch White Paper
Download the AirWatch white paper to learn more.

]]>
7 Reasons to Outsource IT Project Management Services https://www.oneneck.com/blog/7-reasons-to-outsource-it-project-management-services/ Tue, 14 Nov 2017 19:39:00 +0000 https://www.oneneck.com/blog/7-reasons-to-outsource-it-project-management-services/ Why do so many IT Projects Fail? Given all the complexity of large IT initiatives, CIOs and IT execs can sometimes feel like they’re flirting with disaster. And they’re not alone. According to the 2016 annual Project and Portfolio Management Survey by Innotas, 55 percent of businesses surveyed had experienced an IT project failure within […]]]>

Why do so many IT Projects Fail?
Given all the complexity of large IT initiatives, CIOs and IT execs can sometimes feel like they’re flirting with disaster. And they’re not alone. According to the 2016 annual Project and Portfolio Management Survey by Innotas, 55 percent of businesses surveyed had experienced an IT project failure within the previous 12 months.

With the growing strategic importance of IT, you’re in the spotlight. To reduce risk, you may have invested in high-quality project management tools — only to find that you are still overspending, missing deadlines and facing quality issues. But don’t blame the technology. Consider investing in an expert IT project management provider who can bridge and leverage your two greatest assets: technology and people.

Why Outsource IT Project Management

For IT groups with limited internal resources, expertise and bandwidth, outsourcing project management can be the most effective option. The client company is provided with a highly experienced PM to enhance strategy and execution, while internal IT resources remain free to focus on their core competencies.

An expert PM will:

  1. Establish a clear project definition by delicately building consensus among competing stakeholder viewpoints and expectations.
  2. Communicate effectively with technical and non-technical staff, vendors and suppliers to clarify roles and responsibilities, stay focused on priorities and check on progress.
  3. Be disciplined in planning, maintaining project integrity, controlling costs, reining in scope, reporting on project status and identifying next steps.
  4. Provide agility and be able to pivot, reset priorities and stay on target in light of shifting technologies, project requirements, costs and timeframes.
  5. Make smart use of project management tools to identify potential problems, adjust plans and seek efficiencies.
  6. Handle both known and unanticipated dependencies between IT components, plus deal with bugs, interoperability issues and other problems that crop up.
  7. Help the team get up to speed, shorten the learning curve, set IT up for success and avoid project slowdowns.

Moving Forward

Outsourcing IT project management may be your best bet for reducing risk, meeting project goals and achieving larger business objectives.

With our highly experienced experts, OneNeck provides single-source IT project management services that reduce the risk of your next IT initiative and ensure your project is completed on time, on budget and with exceptional quality. Contact us to find out more.

 

]]>
Professional IT Project Management | Q&A with Darcy Hannapp https://www.oneneck.com/blog/it-project-management-services-secret-sauce/ Thu, 26 Oct 2017 22:46:00 +0000 https://www.oneneck.com/blog/it-project-management-services-secret-sauce/ A large IT project can get complicated quickly.  The PM has to own the project, manage the customer requirements, scope, resources, schedules, budgets, and communicate, communicate, communicate to ensure that we meet the customer business objectives. That’s why it’s essential to have an effective go-to point person on your team—an invaluable project manager (PM). I […]]]>

A large IT project can get complicated quickly. That’s why it’s essential to have an effective go-to point person on your team—an invaluable project manager (PM). The PM has to own the project; manage the customer requirements, scope, resources, schedules and budgets; and communicate, communicate, communicate to ensure that the customer’s business objectives are met.

I recently sat down with Darcy Hannappel, OneNeck Practice Manager in our Project Management Organization, to get her take on the PM’s role in our project success.

Describe for me the PM’s role in a complex IT project.

The PM’s role is really to coordinate everything. We’re the single point of contact from the moment the project comes in the door, it is assigned to a PM, and they’re off and running. They have to pull the resources together. They send a customer intro letter saying, “Hey, I’m your PM on this project – any issues, let me know. I’m going to work with you to set up a kickoff, get our teams assigned and scheduled.” So, they’re it – the single point of contact for that project.

I can imagine that working closely with the key stakeholders of a project, whether customers or OneNeck team members, can get challenging sometimes. How do you ensure that the right level of communication is happening, so everyone has an opportunity for input, while still keeping the project moving forward?

We have a predefined methodology, and there are key performance indicators (KPIs) that our PMs are expected to comply with. For example, once a project is assigned, the PM has 24 hours to respond to the account executive (AE), and  48 hours to reach out to the customer to let them know, “I am your PM.”

As a PM practice manager, we look at KPIs in the system, because the projects are all entered into our tracking tools. So, we know exactly when they kick off the project, I know who the team is, and we know what status meetings are scheduled. Then once the project is kicked off, at a minimum, they have weekly status updates with the entire team – customer and internal.

We have standard templates for the notes they’re required to take. We have SharePoint sites where we put all of our collaboration docs. And we schedule regular status meetings with the customer and team to ensure constant communication.

So, we’ve got a pretty good standard methodology that all the PMs comply with. This assures us that every customer is treated the same way and receives all of the information they need. The AMs receive all the information. And our documents look and feel the same, no matter what the customer project.

Is this a process that we, OneNeck, have defined, or is it standard?

Kind of standard, but being the hybrid IT provider we are, with professional services projects, managed services projects and a conglomerate of other projects, we took the standard methodology and customized it.

Can you give me an example of a project OneNeck has done that a PM played an integral role in its success?

There are so many. But one of the big ones that we just wrapped up was for a major car manufacturer that was moving their headquarters to a different state. We were tasked with building out their entire network in the new location. At one time we had upwards of 10 network engineers on the project, so there was a huge coordination effort to get the gear shipped to OneNeck’s warehouse and then dispersed to each site as the campus was being constructed and each building was ready for us to come in with it. So, there was a huge coordination effort with the customer’s engineers, with the construction folks and with our team to build out a schedule. And even with our warehouse personnel, to ship the gear at the right time and have the right people on the ground to install it when it showed up. This was an 18-month project, and it was a huge success.

A large, well-known city is another one. They’re a constant managed services customer that has been with OneNeck for quite a long time, and we’re constantly doing migrations and upgrades for them. We have a program manager, since it is so large, and at least 2 PMs reporting underneath them. They report through custom dashboards to the customer on a weekly basis, sometimes twice a week, depending on what’s going on. These PMs are on board, in touch all the time – a stellar example of great project management. There are so many moving parts and pieces to these managed services customers, not to mention the new projects coming in, and of course giving them the customer service that they expect from us, and so far, we’ve been on the mark.

What crucial skills do you think makes a good PM?

Number 1 – communication. 90% of our job is communicating. You have to be able to communicate in person or virtually through WebEx sessions. You have to be able to communicate so precisely, and to blend your communication style with your audience. It could be with a high-level exec at a customer, or maybe one of our very technical engineers that are on the ground doing the nuts-n-bolts work. So, communication skills are an absolute #1.

Number 2 would be the ability to juggle lots of things, and to do it effectively and efficiently – as well as to switch gears quickly.

Being technical is really pretty key. We’ve got to have enough knowledge to talk to very technical people at their level. If you don’t, you don’t get the same level of engagement. So, you have to have the aptitude to pick stuff up quickly and put the pieces together in real time.

Our PMs are incredible. I don’t know how they manage all of what they do, but they do. They stay on top of all of it.

You’ve been at OneNeck for many years. How have you seen the role of the PM evolve with the industry?

In the old days, we used to install gear, turn up the customer, hand it off to them and say, “Let us know when you need an upgrade.” Those days are over. Now the customers come to us and say, “I need you, in your data centers, to run my IT infrastructure.” So, as a PM, we’ve had to blow out our knowledge base and embrace the fact that we’re that customer’s IT department: we’re monitoring, we’re backing up, we’re patching, we’re the middleman for multiple vendors, we’re doing it all.

The amount of technology we deal with is mind blowing. For example, migrating a customer from on-prem to ReliaCloud. Fifteen years ago, I would have said, “How do you spell that?” Today, the brain automatically starts going: okay, you need monitoring, you need data center, you need servers, you need network, all of these parts and pieces. It’s evolved into such a hybrid skillset.

So, today’s PM has to interface with every single one of those areas and understand what piece comes first, where the dependencies are, or where there might be a missing link. So, we’re often the ones mid-stream that figure out something’s missing, and thankfully, we have some incredible tools that help us manage these complex projects. But, like anything, the tools are only as good as the people putting the data in them, so the PMs are even more key than the tools.

The engineers rely on us to build a schedule, to keep the customer informed, to know what needs to be done and to understand dependencies. So, it’s a huge responsibility. But, I’ll tell you what, we’ve got an incredible team of PMs.

And, honestly, it’s exciting. If you are bored with what you’re doing because you’re not learning anything new, then come be a PM. I guarantee you, every day you’re going to learn something new.

Anything in conclusion?

Customer is FIRST. If we don’t have a happy customer, we don’t have a successful project. If that customer isn’t going to look back in the end and say, “Wow, you guys did a great job,” then we fell short somewhere. So, the customer is first and foremost in every PM’s mind.

 

]]>
Reliable Hybrid IT Solutions – Hyperconvergence | OneNeck https://www.oneneck.com/blog/reliable-hybrid-it-solutions-with-hyperconvergence/ Tue, 17 Oct 2017 17:00:00 +0000 https://www.oneneck.com/blog/reliable-hybrid-it-solutions-with-hyperconvergence/ IT departments are under increased pressure to move fast and be able to change direction at a moment’s notice. Aging and complex infrastructures just can’t keep up with the pace that’s needed for power and speed, especially with apps growing 5x faster than IT can deliver. What’s the answer? For many organizations, a hyperconverged infrastructure, […]]]>

IT departments are under increased pressure to move fast and be able to change direction at a moment’s notice. Aging and complex infrastructures just can’t keep up with the pace that’s needed for power and speed, especially with apps growing 5x faster than IT can deliver.

What’s the answer? For many organizations, a hyperconverged infrastructure, which empowers IT through a simple, efficient, software-defined platform that provides the agility and economics of the cloud with the enterprise capabilities of on-premises infrastructure. The notion of top-quality storage, computing, and networking rolled into one compact appliance has become a sensible choice to many aspiring enterprises. And in today’s ultra-competitive business setting, the demand for a hyperconverged solution like the HPE SimpliVity 380, powered by Intel® Xeon® processors, is higher than ever.

The need for a simple, dependable solution is so high that 40 percent of today’s organizations have moved to a hyperconverged infrastructure.1 The top motivators for such a move include improved uptime, expanded user flexibility, and greater simplicity for deploying infrastructure.

So, hyperconvergence is here to stay. That’s a no-brainer. But what about the specific solution — the HPE SimpliVity 380? Let’s let the numbers speak for themselves and see real results from customers who have made the move to hyperconverged:

  • 10:1 reduction in data center devices2
  • 40:1 reduction in storage2
  • 73% decrease in TCO2
  • 50% increase in staff productivity3

Other benefits of moving to the HPE SimpliVity 380, powered by Intel® Xeon® processors, include:

  • Reduced lifecycle complexity. While automating multiple tasks, the HPE SimpliVity 380, powered by Intel® Xeon® processors, also performs important updates to VMware ESXi and vCenter servers with minimal clicks.
  • Unmatched operational efficiency. New virtual machines (VMs) can be created in merely one minute with three simple clicks.
  • Enhanced scalability and affordability. Based on an on-demand payment model, the HPE SimpliVity 380, powered by Intel® Xeon® processors, scales from two to 16 nodes while supporting maximum VMs.

How’s the Market Reacting to Hyperconverged?
Check out this podcast as HPE and OneNeck discuss why companies are making the move to hyperconverged and the real results that they have achieved by making that move.

]]>
Speed Up the Server Refresh Cycle | OneNeck IT Solutions https://www.oneneck.com/blog/is-it-time-to-speed-up-the-server-refresh-cycle/ Thu, 12 Oct 2017 17:00:00 +0000 https://www.oneneck.com/blog/is-it-time-to-speed-up-the-server-refresh-cycle/ The conventional wisdom has been that computer hardware should be retained for eight years, if not longer. Yet this eight-year lifecycle is no longer always tenable in today’s fast-changing technology environment. A May 2017 IDC study confirms this, reporting that businesses that upgrade their hardware more frequently save money on operations, earn more revenue and […]]]>

The conventional wisdom has been that computer hardware should be retained for eight years, if not longer. Yet this eight-year lifecycle is no longer always tenable in today’s fast-changing technology environment. A May 2017 IDC study confirms this, reporting that businesses that upgrade their hardware more frequently save money on operations, earn more revenue and are better able to adapt to rapidly shifting technology landscapes.

With this in mind, here are four specific reasons to speed up server replacement cycles:

  1. Old servers require more maintenance resources.
    Although purchasing new servers is expensive, maintaining old servers is even more so. IDC found that server operating costs in years 4-6 of a server’s lifecycle are more than ten times higher than the initial acquisition cost. By purchasing new servers, organizations save 59% over the first three years of operation. Another major advantage of new servers is that the IT team can spend less time on server maintenance. Instead, the team can devote more resources towards projects that contribute to innovation and add real value to the business.
  2. After installing new servers, organizations enjoy performance improvements.
    Old servers aren’t just expensive. They also perform worse the longer they stay around, and maintenance can only do so much to stave off decline. By installing new servers more regularly, businesses enjoy the high performance offered by new hardware.

    This benefit trickles down to end users, who will, in turn, be more productive and experience fewer technological disruptions. IDC estimates that upgrading hardware can reduce unplanned application outages by 78% over the first three years. This has a significant effect, not only on your internal operations, but also your ability to service customers and generate revenue.

  3. New servers enable organizations to better implement next-generation and cloud-native applications.
    Relying on old hardware seriously hinders your organization’s ability to utilize next-generation applications. With the migration to the cloud, more and more applications are becoming “cloud-native.” If your organization utilizes a private cloud, you need a modern infrastructure to support cloud-native applications. By installing cloud-native applications on modern hardware, organizations can enjoy the economic benefits of the public cloud within a private cloud infrastructure.
  4. When supported by an up-to-date hardware infrastructure, businesses gain the agility to remain competitive in today’s marketplace.
    Agility is a must to remain competitive. A modern and agile infrastructure is essential for making real gains, whether that means improving internal operations, developing new products or optimizing the customer experience. New servers offer the agility that is necessary to innovate. With old hardware, you’re just treading water.

    The value of new servers goes beyond cost savings and greater efficiency. By upgrading infrastructure, organizations gain agility. Instead of spending precious time fixing problems with old hardware, the IT team can serve as the impetus for strategic innovations that contribute to business success.

Like It or Not, it’s a World Gone Hybrid.
At OneNeck, along with our top-notch partners like Dell EMC, we have extensive experience helping our customers navigate a hybrid world, a world that contains hardware-based infrastructures and multiple on-premises, public and private clouds. We can help you determine what kind of modern infrastructure best suits your business needs now and in the future. So, don’t go it alone.

 

Get Your Free IDC Report Sponsored by EMC

Download a complimentary copy of the 2017 IDC Report sponsored by Dell EMC: Accelerate Business Agility with Faster Server Refresh Cycles

]]>
IT Solutions | Flash Storage – Supercharge Your Performance https://www.oneneck.com/blog/when-milliseconds-matter-supercharge-your-performance-with-flash/ Tue, 19 Sep 2017 17:00:00 +0000 https://www.oneneck.com/blog/when-milliseconds-matter-supercharge-your-performance-with-flash/ Flash storage is critical for businesses that emphasize performance and move quickly. When milliseconds matter, flash just might be the solution to boost your I/O performance and stay ahead of the competition… Flash storage is any device that uses flash memory, such as a USB flash drive or video game. Flash works by storing data […]]]>

Flash storage is critical for businesses that emphasize performance and move quickly. When milliseconds matter, flash just might be the solution to boost your I/O performance and stay ahead of the competition…

Flash storage is any device that uses flash memory, such as a USB flash drive or a video game console. Flash works by storing data as an electrical charge in memory cells and is usually packaged in surface-mounted chips fixed to a printed circuit board. Flash has no moving parts, and it retains data even in the absence of a power supply. Offerings in flash storage now include the all-flash array (AFA), a data storage system that holds multiple flash memory drives.

In many ways, an AFA is an improvement over spinning hard disk drives (HDD). An AFA needs only about 1/11th of the rack space required for HDD-based storage. Flash storage cools more easily and quickly, and it requires less staffing than other methods of storage. The primary reason an AFA uses less energy is that it contains no moving parts: the device generates less heat, consumes less power and requires less maintenance. With less potential for spontaneous breakdown, an AFA is also less likely to need repair. Finally, flash storage usually takes less than an hour to set up and is compatible with cloud services.

Understanding the Benefits of Flash Storage
Flash storage provides 12 realized benefits to your bottom line:

  1. Flash storage works well for applications that have intense input/output (I/O) workloads.
  2. Flash storage benefits applications that require lower latency. Latency is the amount of time that it takes for a message to reach a system. Since flash storage runs smoothly, it causes few spikes in latency. Technology has developed a great deal and delays are now down to microseconds.
  3. Flash storage allows data to transfer at faster rates than HDD. With flash, your system can perform real-time analytics. Flash operates three to 10 times faster than HDD, and beyond raw transfer speed you should see better overall performance than with HDD, including faster startup times and better network performance (a small worked example of what lower latency means in practice follows this list).

    As an example, without flash, it took one government agency about an hour and a half to run a report. With flash, the agency can run the report in under five minutes. Being able to gain more insight more quickly will help you meet your customers’ needs and show that you can lead in your field.

  4. Flash storage is known for its ability to protect data and maintain a system that functions reliably. It has built-in protections that ensure data will be guarded even if the system fails or needs to be maintained. This makes flash ideal for disaster recovery, while also helping you keep property taxes and insurance costs low.
  5. Flash storage does not require a lot of maintenance, so it frees time for your IT staff to perform other responsibilities.
  6. Flash helps your data remain secure. You can transfer data between cloud services and on-premises technology, and the appliances in your office become a proxy. Using encryption at rest and in transit, you ensure that the information your business utilizes remains safe.
  7. Flash storage is scalable. You can install a single server or an array. It is also cheaper than DRAM and is block addressable. There are many price, performance and cost options for flash.
  8. Flash is compatible with a wide variety of systems, from legacy systems to next-generation digital apps. No matter what type of infrastructure you have, you can automate it with flash.
  9. You can use flash on its own or combine it with DRAM, NVDIMM and hard drives. This helps you address the different needs of your business without driving up cost.
  10. Your data center sees a reduction in power, cooling and floor space costs.
  11. Flash storage is very light, so your portable devices are easy to carry. You can also make your server rooms smaller and more manageable.
  12. Unlike HDD, flash operates silently and electronically, so you will see a reduction in noise.
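The worked example promised above is deliberately simple arithmetic, using assumed order-of-magnitude latencies rather than figures from any specific array, to show why latency improvements can translate into the kind of report-time reduction described earlier.

    # Illustrative arithmetic only: these latency figures are typical orders of
    # magnitude for random I/O, not measurements of any particular product, and
    # latency gains are usually larger than raw transfer-rate gains.
    HDD_LATENCY_S = 0.005      # roughly 5 ms per random I/O on spinning disk
    FLASH_LATENCY_S = 0.0002   # roughly 200 microseconds per I/O on flash
    OPERATIONS = 1_000_000     # e.g., random reads behind a long-running report

    hdd_hours = OPERATIONS * HDD_LATENCY_S / 3600
    flash_minutes = OPERATIONS * FLASH_LATENCY_S / 60

    print(f"Serial I/O time on HDD:   {hdd_hours:.1f} hours")
    print(f"Serial I/O time on flash: {flash_minutes:.1f} minutes")
    print(f"Latency advantage: {HDD_LATENCY_S / FLASH_LATENCY_S:.0f}x")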

What can flash-based storage do for you?
Are you interested in experiencing some of these benefits firsthand? OneNeck offers cloud-based storage that is 100% flash-based, ready to scale out as needed for your enterprise storage array. This top-tier storage option delivers not only high levels of performance and scalability, but also brings new levels of ease of use to your SAN storage. We have seen customers maximize their performance for consistent low-latency response times and supercharge their databases from milliseconds to microseconds – a benefit you can measure!

At OneNeck, we are constantly innovating to offer you options to meet your specific needs. Our cloud-based storage offerings deliver performance based on your unique environment and your business’s demands. Let’s talk about whether or not a flash storage solution could help improve the performance of your environment, enabling you to respond faster to your business demands.

 

LEARN MORE: 10 Considerations When Choosing Flash Storage

]]>
Composable Infrastructure – The Next Big Thing? | OneNeck https://www.oneneck.com/blog/composable-infrastructure-the-next-big-thing-in-the-data-center/ Tue, 22 Aug 2017 17:00:00 +0000 https://www.oneneck.com/blog/composable-infrastructure-the-next-big-thing-in-the-data-center/ The past several years has seen immense evolution in the data center. Long gone are the days of the traditional, siloed infrastructure, where change processes were complex and time consuming. With the entrance of converged infrastructure, and then hyperconverged infrastructure, we’ve witnessed improved productivity and better ability to quickly respond to the needs of the […]]]>

The past several years have seen immense evolution in the data center. Long gone are the days of the traditional, siloed infrastructure, where change processes were complex and time consuming. With the entrance of converged infrastructure, and then hyperconverged infrastructure, we’ve witnessed improved productivity and a better ability to respond quickly to the needs of the business. From there, software-defined technology took the data center to an even more agile place, where storage and compute were defined in software, allowing network administrators to centrally control traffic without having to touch a single piece of hardware.

So, what’s next in the data center? The definition of infrastructure through software. While software-defined storage and software-defined networking are great, they only deal with certain parts of the infrastructure. What if we could consider all parts of the infrastructure, together? Enter composable infrastructure.

How Does Composable Infrastructure Work?
It works like this… Instead of carving compute or storage instances out of certain systems, composable infrastructure uses software to discover and pool all data center resources, regardless of their location or underlying hardware. It then categorizes the pools into service tiers, which deliver compute, storage and network instances as services. This has obvious benefits for IT administrators, who can compose infrastructure on demand from these resource pools to support workload demands. By treating hardware like software, composable infrastructure allows IT to manage infrastructure as code, allocating the right amount of resources (compute, storage, memory, I/O) from shared pools to optimize application performance.
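As a rough sketch of the “infrastructure as code” idea, the example below composes and decomposes instances from a shared resource pool. The pool sizes, class names and allocation logic are hypothetical; a real composable platform does this through the vendor’s unified API rather than in-process objects, but the shape of the workflow is similar: request the resources a workload needs, use them, and return them to the pool.

    from dataclasses import dataclass, field

    @dataclass
    class ResourcePool:
        """A fluid pool of data center capacity discovered by the composer."""
        cpu_cores: int
        storage_tb: float

    @dataclass
    class ComposedInstance:
        name: str
        cpu_cores: int
        storage_tb: float

    @dataclass
    class Composer:
        pool: ResourcePool
        instances: list = field(default_factory=list)

        def compose(self, name: str, cpu_cores: int, storage_tb: float) -> ComposedInstance:
            """Carve a right-sized instance out of the shared pool, or fail fast."""
            if cpu_cores > self.pool.cpu_cores or storage_tb > self.pool.storage_tb:
                raise RuntimeError("insufficient capacity in the pool")
            self.pool.cpu_cores -= cpu_cores
            self.pool.storage_tb -= storage_tb
            instance = ComposedInstance(name, cpu_cores, storage_tb)
            self.instances.append(instance)
            return instance

        def decompose(self, instance: ComposedInstance) -> None:
            """Return an instance's resources to the pool when the workload retires."""
            self.pool.cpu_cores += instance.cpu_cores
            self.pool.storage_tb += instance.storage_tb
            self.instances.remove(instance)

    if __name__ == "__main__":
        composer = Composer(ResourcePool(cpu_cores=256, storage_tb=100.0))
        web = composer.compose("web-tier", cpu_cores=16, storage_tb=2.0)
        print(composer.pool)   # remaining capacity after composition
        composer.decompose(web)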

Sounds like music to any IT admin’s ears, but is it here or in the distant future? Let’s talk HPE Synergy.

HPE Synergy
The composable infrastructure market is new, with HPE leading the charge with the first platform architected from the ground up as composable infrastructure. According to Vikram K, Director, Data Center & Hybrid Cloud, HPE India, “HPE Synergy is a single infrastructure that is designed to reduce operational complexity for traditional workloads and increase operational velocity for the new breed of applications and services. Through a single user interface, this platform enables IT to deliver a new experience by maximizing the speed, agility and efficiency of operations. It is designed to precisely adjust fluid pools of resources, and reduce the cost of over provisioning with a single infrastructure that can run any application.”

HPE says the main customer benefits can be viewed through the composable infrastructure design principles: fluid resource pools, software-defined intelligence and a unified API.

Composable Infrastructure Architectural Design Principles

Wondering what HPE Synergy looks like? It’s a 10U enclosure, called a frame, with 12 half-height modular bays that can also be configured as 6 full-height bays and populated with up to 12 servers. Up to 4 storage modules can be added within the frame as well, with up to 40 small form factor (SFF) drives per storage module.


Is HPE Synergy Right for Your Organization?
While composable infrastructure is still new, it’s gaining momentum as organizations are starting to see its impact in the data center. Its self-service, flexible approach to application provisioning is enabling IT to operate much like a cloud provider by maximizing the speed, agility and efficiency of core infrastructure and operations.

Moor Insights & Strategy recommends that “IT organizations begin evaluating vendor product roadmaps and consider proof-of-concept deployments for target applications. Over the next 12 to 18 months, the market is expected to dramatically ramp with additional new products, tighter integrations across vendors, and usability enhancements to make deploying and managing composable infrastructure easier for mainstream IT organizations to adopt more widely.”

Ready to Get Composable?
Get started today!

 


Get the eBook

]]>
6 Easy Steps to Smarter Collaboration | OneNeck IT Solutions https://www.oneneck.com/blog/6-steps-to-smarter-collaboration/ Tue, 20 Jun 2017 17:00:00 +0000 https://www.oneneck.com/blog/6-steps-to-smarter-collaboration/ Today’s modern workplace has evolved into a highly-mobile environment fueled by the cloud. Workers are often geographically distributed across around the world, and their expectations are that they want to work in the same connected way that they live, from anywhere, on any device, at any time. In addition, today’s IT leaders are tasked with […]]]>

Today’s modern workplace has evolved into a highly mobile environment fueled by the cloud. Workers are often geographically distributed around the world, and they expect to work in the same connected way that they live: from anywhere, on any device, at any time.

In addition, today’s IT leaders are tasked with leading the business’ digital transformation, which means delivering a new type of collaborative experience to users, enabling them to work toward the efficient completion of a common goal or business initiative.

However, these modern expectations, while a requirement in today’s digital business, can also be extremely challenging to those businesses with aging collaboration equipment. A combination of high maintenance costs, growth or expansion requirements, and managing separate platforms for audio, video and web conferencing can create even more complexity with a disconnected solution.

While Unified Communications (UC) isn’t new technology, collaboration advances are only now truly aligning with the business, making this a critical time for IT to invest in modern collaboration and UC capabilities.

But if you’re not sure where to start down this road to the modernization of your UC strategy, then let’s get started…

Assess your current UC environment
From the outset, it’s important to assess your current state and determine the gaps between where you are and where you want to be down the road. Be sure to include line-of-business (LOB) owners like HR, finance and sales, as their input will be key to the future success of your UC strategy.

Choose the right partner and vendor
Choosing the right partner and vendor to bring your UC strategy to fruition is critical. You need a partner and vendor that will offer you the flexibility in deployment models you require to fit your unique environment. You also want a partner that can deliver a complete end-to-end solution with best-of-breed technologies throughout, built on an infrastructure capable of supporting a reliable communications solution. Working with a partner and vendor that understand the underlying networking challenges can save your organization a lot of grief and heartache down the road.

Prepare your internal proposal
Here’s where that up-front assessment and partner/vendor selection come into play. Develop a comprehensive plan, including solution details, timelines and budgeting numbers. Align the proposal with the needs of the customer base, the workforce, and the short-term and long-range business goals.

Gain internal support
It’s essential to have support and buy-in from the internal stakeholders. The key is to lay out the variables for them and demonstrate that it is riskier to stand still than it is to move forward with a comprehensive UC strategy. With the right strategy in place, the impact on the business is clear – increased productivity and improved customer service. This enables the business to:

  • Modernize the workplace and meet the needs of their modern workforce, directly impacting employee recruitment and retention.
  • Expand the organization’s talent by supporting mobility, collaboration, device ubiquity and geographic diversity.
  • Reduce total cost of ownership through reduced facility and travel costs.

To successfully implement a UC solution that will scale with your business, you need a partner that has extensive experience in designing, deploying, troubleshooting and validating your UC solution, and beyond that, training your users to maximize the solution.

Measure and validate results
Here’s where you loop back around and work with your LOB owners to ensure that your UC solution is meeting their needs, as well as continually fine tuning for enhancement and growth.

In Summary

Companies that cling to outdated technologies and don’t keep pace will find themselves behind the pack, exposing their organizations to unnecessary costs and risks. Don’t let technology hold you back. Make the decision to modernize your UC, and your customers and employees will thank you.

]]>
Breaking Down Micro-segmentation | IT Security Solutions https://www.oneneck.com/blog/breaking-down-micro-segmentation/ Thu, 25 May 2017 17:00:00 +0000 https://www.oneneck.com/blog/breaking-down-micro-segmentation/ IT Security Solutions The rise of the data center means that we are storing more financial, medical and sensitive corporate data off-premises. Yet many data centers are still basing security on the old model of securing the perimeter — a method that is no longer effective on its own in today’s evolving threat landscape. Attacks […]]]>

IT Security Solutions

The rise of the data center means that we are storing more financial, medical and sensitive corporate data off-premises. Yet many data centers are still basing security on the old model of securing the perimeter — a method that is no longer effective on its own in today’s evolving threat landscape.

Attacks are increasing in frequency and sophistication, and new approaches are needed to protect sensitive assets. One approach that’s growing in popularity is micro-segmentation. This emerging security trend enables fine-grained security policies to be applied to data center applications and individual workloads through virtualization. This software-only approach allows you to protect your data and applications even if the hardware-based firewall is breached.

How Micro-segmentation Security Works

Micro-segmentation protects sensitive data by restricting an attacker’s ability to move laterally within your data center infrastructure.

Micro-segmentation uses a virtualized, software-only model to assign security policies to individual workloads, allowing those policies to synchronize with the virtual network, virtual machines and operating systems. Policies can move with the virtual machine or workload in the case of migration or network reconfiguration.

The idea of “zero trust,” coined by Forrester Research, is central to the concept of micro-segmentation: only the necessary actions and connections are enabled for a workload or application, and everything else is blocked. Micro-segmentation has been possible for some time, but it was resource- and cost-prohibitive; network virtualization has now made micro-segmentation a reality in the software-defined data center.
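Here is a minimal sketch of what a zero-trust, per-workload policy can look like, assuming made-up workload tiers and ports: anything not explicitly allowed is denied by default, which is exactly the property that blocks lateral movement.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Flow:
        source: str        # workload or tag, e.g. "web-tier"
        destination: str   # e.g. "db-tier"
        port: int

    # Zero trust: only the connections a workload actually needs are allowed;
    # everything else is blocked by default. These example tiers are illustrative.
    ALLOWED_FLOWS = {
        Flow("web-tier", "app-tier", 8443),
        Flow("app-tier", "db-tier", 5432),
    }

    def is_permitted(flow: Flow) -> bool:
        """Default-deny policy evaluation for east-west (lateral) traffic."""
        return flow in ALLOWED_FLOWS

    if __name__ == "__main__":
        print(is_permitted(Flow("web-tier", "app-tier", 8443)))  # True
        print(is_permitted(Flow("web-tier", "db-tier", 5432)))   # False: lateral move blocked

In a real deployment these rules travel with the workload, so the same default-deny posture holds even after a migration or network reconfiguration.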

The Benefits of Micro-segmentation

Micro-segmentation holds key advantages for IT security and business operations, including:

  • Minimizing the risk and impact of breaches: If the data center perimeter is breached, micro-segmentation will block the bad actors’ lateral movements, stopping them from reaching other servers and consequently reducing the attack surface.
  • Reducing capital investment and operating expenses: Capital expenditures are decreased because there’s no need to purchase more physical firewalls for each workload. Additionally, there’s a dramatic reduction in manual tasks, resulting in lower operating costs.
  • Automating security workflows: Workloads can be grouped intelligently based on their attributes. Automated security workflows enable rapid, adaptable responses to threats, dramatically improving the odds against adversaries that use sophisticated tactics designed to circumvent countermeasures.
  • Creating a single pane of glass: IT administrators can manage thousands of firewalls as one — automating workflows and policies and propagating configuration changes.
  • Automating IT service delivery: Provisioning for cloud and traditional applications takes place in seconds, not days or weeks.
  • Leveraging existing infrastructure: Virtual networks can coexist on physical ones. The IT team can virtualize a portion of the network by simply adding hypervisor nodes, meaning organizations can leverage their existing physical security infrastructure and deploy micro-segmentation based on their particular needs.

Micro-segmentation and network virtualization boost security by providing persistent protection that moves with an application or workload even if the network is reconfigured. Organizations that have adopted micro-segmentation are innovating faster, improving the customer experience and gaining a competitive advantage by building secure virtual networks.

At OneNeck, we have extensive experience with data center micro-segmentation and helping our customers navigate complex data center solutions. Our in-house teams of experts are ready to help you determine if micro-segmentation is a fit for your organization to defend against today’s complex threats.

]]>
Congratulations You’ve been Spear-Phished |April Fools! https://www.oneneck.com/blog/2017/3/ Fri, 31 Mar 2017 17:00:00 +0000 https://www.oneneck.com/blog/2017-3/ April 1st is a day marked by good humored pranks, but the internet has changed that significantly; when you get pranked by a cybercriminal, your internet security is compromised, and that’s no joking manner. Phishing 101 Phishing attacks are one of the most common cybersecurity challenges that organizations face in keeping their information secure, and […]]]>

April 1st is a day marked by good-humored pranks, but the internet has changed that significantly; when you get pranked by a cybercriminal, your internet security is compromised, and that’s no joking matter.

Phishing 101

Phishing attacks are one of the most common cybersecurity challenges that organizations face in keeping their information secure, and the latest twist is called spear phishing. Spear phishing is a more targeted form of phishing in which the email appears to come from someone the recipient knows and trusts (a friend, co-worker or even their employer’s human resources department). Before crafting the message, the attacker researches the victim’s social media profiles on LinkedIn, Twitter and Facebook, collecting data to build a profile of the victim’s life, work and interests, all in an attempt to acquire sensitive information or credentials.

According to Symantec’s 2016 Internet Security Threat Report, spear-phishing campaigns targeting employees increased 55% in 2015, and that growth is expected to continue.

In addition, the report indicates that in 2015 the number of campaigns increased, while the number of attacks and the number of recipients within each campaign continued to fall. With campaigns getting shorter, it’s clear that these types of attacks are becoming stealthier.

Stealthier? How? It’s simple: spear-phishing attacks are less likely to arouse suspicion within an organization because the campaigns are smaller, shorter and target fewer recipients.

The end result of a successful spear-phishing attack can be substantial revenue loss or damaged reputation.

Thwarting Attacks

Cybersecurity has become a leading issue in business today. Threats, both internal and external, have the potential to shut your business down if you’re not prepared and updating your security systems and processes on a regular basis.

While there is no cure-all that will put an end to spear-phishing attacks, IT security professionals can focus on shrinking the attack surface. A comprehensive security approach, spanning network, application, enterprise mobility, and identity and secure access solutions, is crucial to protect your organization from the internal and external threat landscape.

Once the technical components are in place, it’s imperative to create awareness among your employees regarding spear phishing and the damage these attacks can cause. By showing employees exactly what to look out for, companies can greatly minimize their risk of falling victim.
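As one example of the kind of cue to teach employees (and to encode in mail filtering), the hedged sketch below flags a few common spear-phishing tells, such as an internal-sounding display name on an external sending domain or a mismatched Reply-To address. The domains and keywords are illustrative only, and a toy check like this is no substitute for a real email security gateway or ongoing user training.

    def looks_suspicious(display_name, from_address, reply_to, trusted_domains):
        """Flag a few common spear-phishing cues; illustrative only, not a real filter."""
        findings = []
        from_domain = from_address.rsplit("@", 1)[-1].lower()
        reply_domain = reply_to.rsplit("@", 1)[-1].lower()

        if from_domain not in trusted_domains:
            findings.append(f"sender domain '{from_domain}' is not a known internal domain")
        if reply_domain != from_domain:
            findings.append("Reply-To domain differs from the sending domain")
        internal_sounding = any(w in display_name.lower() for w in ("hr", "payroll", "helpdesk"))
        if internal_sounding and from_domain not in trusted_domains:
            findings.append("internal-sounding display name on an external address")
        return findings

    if __name__ == "__main__":
        hits = looks_suspicious(
            display_name="HR Department",
            from_address="hr-team@payroll-notices.example",  # hypothetical lookalike domain
            reply_to="quick-reply@freemail.example",
            trusted_domains={"yourcompany.example"},
        )
        for hit in hits:
            print("warning:", hit)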

OneNeck, a Trusted IT Security Services Provider

At OneNeck, our security services address the broad scope of security and compliance needs that businesses face. We have a depth of experience in assisting our customers with their security needs, and our team is made up of security experts who stay current on the emerging threats so you don’t have to. Contact us today to talk about your security strategy and avoid getting fooled by outside threats.

]]>
Is Your Workplace Technology Millennial Friendly? https://www.oneneck.com/blog/2017/2/ Tue, 07 Feb 2017 18:00:00 +0000 https://www.oneneck.com/blog/2017-2/ The advances in the technology of every generation have disrupted the status quo in the workplace. In the 1950s, a generation shaped by World War II who was trained to accept top-down leadership and had their workplace ruled by a 9-to-5 day, experienced change through a mainframe computer the size of a room. Baby Boomers, […]]]>

The advances in the technology of every generation have disrupted the status quo in the workplace. In the 1950s, a generation shaped by World War II, trained to accept top-down leadership and ruled by the 9-to-5 day, experienced change through a mainframe computer the size of a room. Baby Boomers, taught to question authority, came up with different rules for leadership and experienced workplace disruption from the electronic calculator, and later the personal computer. For Gen X, it was the Internet, email, online services and video conferencing. But no generation has grown up on technology like the Millennials, and these digital natives are disrupting the way we work in many ways.

The Millennial world is a digital world. Millennials make up approximately one-third of today’s workforce, and there is no doubt they are having an effect on workplace technology. Organizations need to engage with this growing group to successfully navigate the digital transformation. But while transforming the way we do business in an increasingly digital age, we cannot lose sight of how these changes affect our critical processes, privacy and security.

How Millennials are impacting technology in the workplace:

  • Mobility: Millennials are a mobile generation – they work at all times of the day and from anywhere. Mobility is so important to them that one in three Millennials said they “would prioritize device flexibility, social media freedom and work mobility over salary in accepting a job offer.” (Source: Kenan-Flagler Business School.) Gartner sees this disruption and estimates that “70% of enterprises see providing more mobile support to employees over the next 12 months as a high or critical priority.”
  • Cloud: According to IDC, Millennial-led mid-market firms are more likely to embrace a cloud-first strategy and are adopting Software as a Service (SaaS) applications at a 20% higher rate as compared to other companies.
  • Wearables and the Internet of Things (IoT): Millennials are embracing the coming tide of connected everything from wearables to smart machines that are disrupting traditional business processes like support and supply chain management. These connected devices are exponentially increasing the amount of data collected every day.
  • Collaboration: Millennials are in constant communication and need immediate feedback. They use text, Snapchat, Instagram, Twitter and more to hold whole conversations within social apps, reshaping how organizations analyze and distribute information. If this group does not feel that their tools meet their needs, they will seek out and use alternative tools, which can create chaos for the security of your data and infrastructure. Forward-thinking companies will consult with this group to learn to communicate and collaborate using apps like Cisco Spark to increase overall productivity.
  • More to come: Millennial-friendly workplace technologies such as cloud, mobile, big data and social media are the new ways of communicating and collaborating, and they will be at the forefront of adoption. Millennials will actively seek technology solutions that save costs, improve processes and increase productivity. The downside is that there may be too much adoption and not enough vetting. The effect these new solutions will have on the workplace needs to be analyzed to prevent too much fragmentation, which could ultimately end up decreasing productivity.

At OneNeck, we embrace the change brought about by the Millennial workforce and can provide the guidance you need to effectively adopt and integrate digital-driven change in your organization that propels your business forward. Don’t let workplace technology overwhelm you – embrace it – and your Millennial workforce will thank you!

]]>
Healthcare Has an Identity Problem https://www.oneneck.com/blog/healthcare-has-an-identity-problem/ Tue, 17 Jan 2017 18:00:00 +0000 https://www.oneneck.com/blog/healthcare-has-an-identity-problem/ There are now three things we can count on in life: death, taxes, and the fact that your healthcare organization could very likely experience a breach. As healthcare records are increasingly digitized, there are new opportunities for improving patient care — but there is also risk. According to a recent report from the Ponemon Institute, […]]]>

There are now three things we can count on in life: death, taxes, and the fact that your healthcare organization will very likely experience a breach. As healthcare records are increasingly digitized, there are new opportunities for improving patient care — but there is also risk. According to a recent report from the Ponemon Institute, 90% of all healthcare organizations have suffered at least one data breach during the past two years. The total cost of these breaches? $6.2 billion.

The average employee uses up to three devices on a daily basis and healthcare providers expect that they will be able to seamlessly transition between devices. Access to data and systems must be fast to reliably provide real-time answers and run complex medical applications.

Mobility, digitization and the Internet of Things (IoT) can significantly enhance a practitioner’s ability to improve communication with patients. But this anywhere, anytime access and sharing creates a security problem when we think about tracking the identities behind these devices. Healthcare organizations must take proactive steps to protect data from the wrong hands and improve cybersecurity measures.

Healthcare IT organizations face many challenges, including:

  • Allowing access to a variety of different users with different access levels
  • Securing both personal and provider-owned mobile devices
  • Enforcing policies to protect patient data
  • Improving productivity by delivering a great user experience
  • Supporting real-time, immediate access to medical applications including voice and video
  • Keeping operational costs low and simplifying IT management

How Cisco Identity Services Engine (ISE) Can Help

In order to maintain security, every person interacting with the provider network needs a digital identity that is authenticated in real time so that any unusual activity is flagged immediately. Cisco ISE enforces security policies, contains threats and protects sensitive patient data. ISE includes the following features:

  1. Centralized control over the level of user access based on business role, providing a consistent network access policy for end users whether they connect through a wired or wireless network or VPN. This means a medical technician is granted different access privileges than a physician, whose privileges differ in turn from a hospital administrator’s (see the conceptual sketch after this list).
  2. Reduced numbers of unknown endpoints through profiling and the device profile feed service, giving IT teams greater visibility and more accurate identification of all devices connected to the network.
  3. Dynamic visual workflows that simplify and manage the guest experience for easier onboarding and administration through fully customizable, branded mobile and desktop guest portals.
  4. Out-of-the-box setup, self-service device onboarding and management, internal device certificate management, and integrated enterprise mobility management (EMM) partner software that speed BYOD and enterprise mobility.
  5. Software-defined segmentation policies that contain network threats and dynamically segment access without the complexity of multiple VLANs or the need to redesign the network.
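
To make the role-based idea in the first item concrete, here is a minimal conceptual sketch in Python. It is not Cisco ISE code or its API, and the roles, VLANs and segments are illustrative assumptions; the point is simply that the authorization decision keys on the user’s business role and the endpoint’s identity, not on which port or SSID the session arrives on.

    # Conceptual sketch of role-based network authorization, illustrating the idea
    # behind the first feature above. This is NOT Cisco ISE code or its API; the
    # roles, VLANs and segments are illustrative assumptions.
    from dataclasses import dataclass

    ACCESS_PROFILES = {
        "physician":      {"vlan": 10, "segments": ["ehr", "imaging", "voice"]},
        "med_technician": {"vlan": 20, "segments": ["imaging"]},
        "administrator":  {"vlan": 30, "segments": ["billing", "scheduling"]},
        "guest":          {"vlan": 99, "segments": ["internet_only"]},
    }

    @dataclass
    class Session:
        user: str
        role: str              # business role from the identity store
        connection: str        # "wired", "wireless" or "vpn"
        device_profiled: bool  # True if the endpoint was identified by profiling

    def authorize(session: Session) -> dict:
        """Return the access profile for a session, consistent across media."""
        # Unknown roles or unprofiled endpoints fall back to guest-level access.
        if not session.device_profiled or session.role not in ACCESS_PROFILES:
            return ACCESS_PROFILES["guest"]
        # The same policy applies whether the user connects by wire, wireless or VPN.
        return ACCESS_PROFILES[session.role]

    print(authorize(Session("dr_smith", "physician", "wireless", True)))
    print(authorize(Session("visitor", "unknown", "wired", False)))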

Many healthcare providers have already benefitted from Cisco ISE, including Banner Health, a healthcare provider that runs 29 hospitals in Arizona and six other states and employs more than 47,000 people. Banner Health was endeavoring to provide modern healthcare to patients while also securing patient privacy and data. As a result of partnering with OneNeck IT Solutions, Banner Health now has comprehensive network security policies for both corporate and non-corporate owned devices. Cisco Identity Services Engine (ISE) was an integral part of the OneNeck solution and included licensing for 30,000 devices.

]]>
Compliance Challenges | Strategic IT Security Planning https://www.oneneck.com/blog/strategic-it-planning-addressing-compliance-challenges-in-a-non-compliant-world/ Tue, 27 Sep 2016 17:00:00 +0000 https://www.oneneck.com/blog/strategic-it-planning-addressing-compliance-challenges-in-a-non-compliant-world/ The scale, scope and complexity of regulatory compliance rules continue to increase – for good reason. Compliance mandates exist to hold companies accountable, mitigate risk and protect employees and customers from fraud, unfair practices and increasingly against malicious cyber activity. Today’s threat landscape has pushed compliance mandates to new levels, making it more difficult to […]]]>

The scale, scope and complexity of regulatory compliance rules continue to increase – for good reason. Compliance mandates exist to hold companies accountable, mitigate risk and protect employees and customers from fraud, unfair practices and, increasingly, malicious cyber activity. Today’s threat landscape has pushed compliance mandates to new levels, making it more difficult to ensure compliance. To meet audit and certification requirements, it is important to leverage best-practice frameworks like ITIL and NIST and to put methodologies in place so that when (not if) your organization is audited, you pass rather than face stiff penalties.

PCI-DSS 

When there is a breach or theft of credit card data, customers lose trust and confidence in the merchant, and the financial institutions involved can be subject to numerous financial liabilities. Small businesses are particularly vulnerable. In 2015 alone, Symantec reported that close to half of cyber-attacks worldwide were against small businesses with fewer than 250 employees. Larger credit card breach examples include Target, Walmart, Home Depot and JPMorgan Chase. E-commerce sites are also a frequent target.

The Payment Card Industry Data Security Standard (PCI-DSS) is a set of requirements that must be followed by anyone who processes payments by credit card to keep that payment information secure. Point-of-sale systems, online shopping carts, and wireless access routers are all covered in the regulations. New guidelines were published in 2016 to keep up with new threats, and it is important to make sure that you have made the necessary updates in order to remain compliant.

PCI DSS includes the following six objectives that companies are required to abide by:

  • Protect cardholder data.
  • Maintain an information security policy.
  • Build and maintain a secure network.
  • Implement strong access control mechanisms.
  • Maintain a vulnerability management program.
  • Regularly monitor and test networks.

HIPAA

For companies in the healthcare sector, Health Insurance Portability and Accountability Act (HIPAA) standards are in place to secure protected health information (PHI) and to protect a patient’s personally identifiable information (PII). In 2015, healthcare was the top target for breaches due to the value of patient data, and as a result, the Office for Civil Rights has been conducting random HIPAA audits. The top four HIPAA rules that must be met include:

  • Ensure the company’s employees practice compliance.
  • Protect against inappropriate information disclosures.
  • Identify likely security threats, and establish protective measures against them.
  • Ensure that all electronic PHI that is created or stored remains confidential.

HIPAA implementations include requirements such as encryption, strong passwords and multi-factor authentication, and a lapse in a single implementation can compromise an organization’s entire security posture.
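
As a small illustration of the encryption requirement, the sketch below encrypts a PHI record at rest using the open-source Python cryptography package (Fernet authenticated encryption). It is a minimal sketch, not a complete HIPAA control; a real deployment would also need key management, access controls and audit logging, and the record shown is a made-up example.

    # Minimal sketch: encrypting a PHI record at rest with the "cryptography" package.
    # Install first with: pip install cryptography
    from cryptography.fernet import Fernet

    # In practice the key comes from a key-management system, never hard-coded.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    record = b'{"patient_id": "12345", "diagnosis": "example"}'

    token = cipher.encrypt(record)     # ciphertext that is safe to store on disk
    restored = cipher.decrypt(token)   # only possible with the key

    assert restored == record
    print(token[:32], b"...")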

Three Critical Compliance Mistakes

While compliance mandates may vary from one industry to another, there are common mistakes that appear across all sectors.

  • Mistake 1: Little to No Evaluation of IT Solution Providers
    Carefully consider the qualifications of your IT solutions provider, as a breach at their facility can affect your audit and leave you subject to non-compliance findings and regulatory fines. You will need to validate your third-party IT service provider’s certifications and investigate their track record with the compliance standards you are required to meet. Ask questions to understand what policies and procedures your provider has in place to mitigate threats, what their disaster recovery plan is and how they will respond to any incidents.
  • Mistake 2: Neglecting Physical Security
    Physical security is often an afterthought, but it is a critical requirement for staying in compliance and preventing unauthorized access to your network. If the facility does not have policies and procedures in place, it can mean unauthorized access to your organization’s data that could be carried out of a facility on a laptop or flash drive. Policies should cover who has access to locked server racks, suites and cages, who is authorized and how much access they are granted. Is two-factor authentication required for building access? Is there 24/7 monitoring, and are security cameras and alarm systems in place?
  • Mistake 3: Not Routinely Monitoring Security and Compliance Processes
    Compliance mandates are constantly changing to keep up with the current economic and threat landscape. Being in compliance is one thing – staying in compliance is another thing entirely. Your service provider must provide you with the guidelines they follow to continually evaluate their facility, procedures and infrastructure. Many security breaches are due in part to organizations that are too lax with their administrative rights and privileges, as well as to human negligence. Ask questions to find out how often policies are updated, how new threats are responded to and how often employees are evaluated.

Trusting Your IT Service Provider

It is imperative that your IT service provider is a true partner that you can trust to maintain your security and meet your compliance requirements. OneNeck IT Solutions leverages best-practice frameworks, like ITIL and NIST, to ensure your compliance needs are met. We use these methodologies to document, train and govern our services effectively, allowing us to address our customers’ compliance needs and auditor reviews, giving you the peace of mind you need to ensure you’re compliant and secure.

]]>
Strategic IT Decisions | OneNeck Custom IT Solutions https://www.oneneck.com/blog/digital-transformation/is-technology-or-strategy-dictating-your-it-decisions/ Thu, 18 Aug 2016 17:00:00 +0000 https://www.oneneck.com/blog/digital-transformation-is-technology-or-strategy-dictating-your-it-decisions/ Does technology drive digital transformation? Surprisingly, the answer is no. Technology informs corporate strategy, but it’s the business strategy that needs to be in the driver’s seat for digital transformation to be truly successful. However, many businesses put the cart before the horse, allowing technology to dictate digital decisions. A recent study by MIT Sloan […]]]>

Does technology drive digital transformation? Surprisingly, the answer is no. Technology informs corporate strategy, but it’s the business strategy that needs to be in the driver’s seat for digital transformation to be truly successful. However, many businesses put the cart before the horse, allowing technology to dictate digital decisions.

A recent study by MIT Sloan Management Review and Deloitte sheds light on what differentiates companies that pursue strategy-driven, rather than technology-driven, innovation. Digitally mature enterprises use new technologies to transform business operations on a large scale. In contrast, less mature businesses use technology more like a Band-Aid. According to a survey of over 4,800 business executives, only 29% of businesses are in the maturing stage.

The study’s authors identify key characteristics of digitally maturing businesses:

  • There are large-scale objectives in place. The goal isn’t simply to move to the cloud or digitize records, but rather to focus on the big picture. Digital transformation should align with the organization’s business goals first, and then new technologies can facilitate those objectives.
  • Employees receive regular training on how to incorporate new technologies into their workflow. The Deloitte study shows that employees want to work for companies that are digital leaders and incorporate that mission into a culture that strives for constant improvement.
  • Risk-taking becomes the norm. Digitally maturing companies encourage managers and employees to take risks in the pursuit of excellence. These companies encourage employees to think big and act boldly when adopting new technologies to meet business goals.
  • Corporate leadership drives digital strategy. Companies that are digitally mature also empower individuals or groups to make decisions that influence the entire organization’s digital strategy. These leaders may not be IT experts, but they are able to understand the big picture of digital transformation: why digital transformation is necessary and how it can accelerate business growth.

Technology and strategy are inextricably linked. The respondents in the Deloitte study cited the top 5 impediments to adopting digital trends as: 

  • too many competing priorities (43%),
  • a lack of an overall strategy (33%),
  • security concerns (25%),
  • insufficient technical skills (25%)
  • and a lack of organizational agility (24%).

Organizations need to evaluate their own digital maturity and determine how new digital initiatives can influence their growth. Does your organization value business strategy tied to your technical initiatives, or are you letting technology take the driver’s seat?

 

In this fast-paced technology landscape, change is a constant, and we can’t afford to lose focus on our business strategy. To develop a clear and coherent digital strategy, an outside perspective can be invaluable. OneNeck IT Solutions’ team of experts can evaluate where you are right now and help you refine a digital transformation strategy that incorporates the right elements for a successful business future.

 



]]>
Putting SDN to Work | Application Centric Infrastructure https://www.oneneck.com/blog/putting-sdn-to-work-with-an-application-centric-infrastructure/ Tue, 19 Apr 2016 17:00:00 +0000 https://www.oneneck.com/blog/putting-sdn-to-work-with-an-application-centric-infrastructure/ Putting SDN to Work with an Application Centric Infrastructure In a recent blog post we outlined some of benefits and challenges of adopting Software Defined Networking (SDN), including the need to abstract network components to enable a new kind of network infrastructure. To give you a better understanding of what it takes to implement SDN, […]]]>

Putting SDN to Work with an Application Centric Infrastructure

In a recent blog post we outlined some of the benefits and challenges of adopting Software Defined Networking (SDN), including the need to abstract network components to enable a new kind of network infrastructure. To give you a better understanding of what it takes to implement SDN, we wanted to delve a little deeper into how the hardware and software work together to enable software-defined integration, so we sat down with Brian Dooley, OneNeck Data Center Architect and CCIE, an expert in ACI and numerous other networking technologies.

As an expert with years of experience in networking technologies, how do you view the SDN landscape and current adoption?

The IT world seems to want to embrace SDN just to keep pace with the industry, or at least become more educated about it, and Software Defined Networking is being presented in a number of formats, by just as many vendors. Some solutions suggest SDN is merely moving switching and routing into servers and off hardware. Whether you move your infrastructure into servers to support a virtualized solution or deploy a new hardware solution to move towards SDN, there are points of commonality across each environment.

The real goal of SDN is to get traditional network engineers to begin thinking like software developers. If you have never studied software development, this may seem a bit daunting to begin with. There is no expectation for these engineers to become developers themselves. While you are welcome to review the SWEBOK, your time would be better spent reviewing how systems communicate with each other in your existing data center. The primary idea is to streamline the development to operations transition so networking is not seen as a bottleneck to deployment.

So, how does SDN compare to traditional networking environments?

Traditional environments are segmented by VLANs. A single subnet of IP addresses is mapped to a single VLAN. VLAN “A” may or may not communicate with VLAN “B”, yet there is not necessarily consistency among all entities in how they are treated with regard to this policy. While this segmentation has worked for us for years, the expectation today is to speed up implementation and to adjust the segmentation to match the solutions being developed. Historically, applications have been deployed by fitting them into the infrastructure the network team dictates. Existing constructs have become overloaded, making them less agile and unwieldy for the needs of developers.

In an effort to get away from these constructs, the new approach is to treat each individual application as a unique entity. To accomplish this, we must forget how we perform networking today and consider instead how an application communicates with its various components. A typical three-tier application may be made up of web, application and database. Each of these parts can be seen as a separate component that requires unique treatment. If we break out Web and identify what aspects make it unique or different, we can then identify other applications with components that have the same requirements. By grouping resources that require the same policy treatment, we begin to look at an application through a developer’s eyes. So if there are separate containers for each of the three components (Web/App/DB), we can then connect them to each other in a controlled, deterministic manner.
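
As a conceptual sketch of that grouping idea (illustrative Python only, not ACI configuration or APIC syntax), the snippet below models the three tiers as endpoint groups and expresses their allowed communication as explicit contracts between groups rather than as VLANs and subnets.

    # Conceptual model of application-centric grouping: endpoints are grouped by
    # the policy treatment they need, and groups are connected by explicit
    # contracts. Illustrative only -- not Cisco ACI objects or APIC syntax.

    epgs = {
        "web": {"members": ["web-01", "web-02"]},
        "app": {"members": ["app-01"]},
        "db":  {"members": ["db-01"]},
    }

    # Each contract says which consumer group may reach which provider group,
    # and on which TCP port (roughly what an ACI filter would scope).
    contracts = [
        {"consumer": "web", "provider": "app", "tcp_port": 8080},
        {"consumer": "app", "provider": "db",  "tcp_port": 5432},
    ]

    def allowed(src_epg: str, dst_epg: str, port: int) -> bool:
        """Traffic is permitted only if a contract explicitly allows it."""
        return any(
            c["consumer"] == src_epg and c["provider"] == dst_epg and c["tcp_port"] == port
            for c in contracts
        )

    print(allowed("web", "app", 8080))  # True  - covered by a contract
    print(allowed("web", "db", 5432))   # False - web may not reach the database directly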

So, let’s talk specifically about Cisco’s Application Centric Infrastructure. What does it entail?

Cisco is providing us with a hardware-based solution for network infrastructure that changes the way we do networking. ACI operates as an overlay on a Clos Fabric (see Charles Clos) solution that administers a network through policy. By pushing policy through an API to the Fabric to deploy infrastructure, you can optimize and automate the processes of deployment.

ACI uses Nexus 9000 series switching hardware. This hardware is broken up into Spine switches and Leaf switches. As part of an ACI Fabric, an engineer does not directly configure these switches. The Fabric is configured, administered, and controlled by the Application Policy Infrastructure Controller (APIC), which runs on “specialized” Cisco C-series servers. The APIC is actually a cluster of three servers that share state information of the Fabric for high availability. The APIC is needed for changes to the Fabric, but it does not partake in forwarding of traffic. The Fabric can continue to operate in the absence of the controller (i.e., an APIC failure), but no changes would be possible without it.

An ACI Fabric is cabled up with each Leaf connecting to every Spine. Then the APIC(s) is connected to a Leaf (or multiple Leafs). After a base provision of the APIC through console connectivity, a browser is used to connect to the APIC API. Then the APIC is used to automatically discover the Fabric hardware through LLDP. Once the Fabric discovery is complete, we move on to developing policies to support the applications and their respective connectivity requirements.

There are other aspects of ACI that I won’t go into now. Service Graphs, VMM integration, Layer 2 and Layer 3 external integration all require separate conversations due to the depth needed for each subject.

Well, let’s talk policy. Policies are a huge part of what makes ACI work. Can you explain how?

The ACI environment is built entirely on layers of policies. Creating policies allows them to be reused when building solutions for application owners. There are many types of policies used in building an ACI application solution. The primary goal is to produce what is known as an Application Network Profile (ANP).

  • Application Network Profiles (ANPs) are groups of EPGs and the policies that define the communication between them.
  • End Point Groups (EPGs) are containers for objects (End Points) requiring the same policy treatment, such as application tiers or services.
  • End Points are entities that connect (attach) to the Fabric directly or indirectly.
  • Tenants are logical separators for a customer, business unit, group, etc., separating traffic, administration and visibility.
  • Private Networks (L3) are equivalent to VRFs; they separate routing instances and can be used as an administrative separation.
  • Bridge Domains (BDs) are not VLANs; they are simply containers for subnets and can be used to define a Layer 2 boundary.
  • Contracts represent policies between EPGs. Contracts are “provided” by one EPG and “consumed” by another.
  • Fabric Policies configure the interfaces that connect spine and leaf switches.
  • Access Policies configure external-facing interfaces that do not connect to a spine switch. Access Policy components include the Switch Profile, Interface Profile, Interface Policy Group and Interface Policies.

Combine all of the above pieces, along with an EPG and a VLAN Pool, and associate them with a Domain.

VLAN Pools are a list of VLANs. In ACI the VLAN tag is used purely for classification purposes.

A Domain is a VLAN namespace (a valid VLAN range); it can be physical, bridged external, routed external or virtual (VMM).

An Attachable Entity Profile (AEP) represents a group of external entities with similar infrastructure policy requirements. AEPs are required to enable VLANs on the leaf. Without VLAN pool deployment using AEP, a VLAN is not enabled on the leaf port, even if an EPG is provisioned.

Once an EPG has been associated with switches and their respective interfaces using the appropriate policies, the devices connected to those interfaces may operate within the EPG.

So, that’s a lot of information, but once all of that is defined, what do you do with it to put it to work?

Once the Fabric is up and running the first step is to build a Tenant. Within a Tenant we define a Private Network, which effectively creates a separate Layer 3 network environment in the Fabric.

Within the Private Network we will define Bridge Domains to later be used in defining EPGs. Using the earlier example of a multi-tier application, we create EPGs, each with a unique Bridge Domain, for Users (External), Web, App, and DB. In order for these EPGs to communicate with one another, we define Contracts and Provider/Consumer relationships between each EPG. An EPG that has the resources is the Provider. The EPG that requires the resource is the Consumer.

Within a Contract, specific information known as filters is defined, which can restrict access between EPGs to specific TCP port (Layer 4) information. If additional Layer 4-7 services are needed, they can be integrated into the Contracts through Service Graphs. Service Graphs provide connectivity to external resources such as firewalls and server load balancers.
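
For readers who want to see roughly what pushing policy through the API looks like, here is a hedged sketch of the kind of JSON a script might POST to the APIC REST API to build the objects described above. The APIC address, credentials and names are placeholders, and the object classes (fvTenant, fvCtx, fvBD, fvAp, fvAEPg, vzBrCP and their relation objects) are shown as commonly documented for the ACI object model; verify the payload against your APIC version before using anything like this.

    # Rough sketch of driving an ACI Fabric through the APIC REST API with Python.
    # The APIC address, credentials and object names are placeholders; check the
    # payload classes against your APIC version before relying on this.
    import requests

    APIC = "https://apic.example.com"   # placeholder APIC address
    session = requests.Session()
    session.verify = False              # lab convenience only; use real certificates

    # 1. Authenticate; the returned token rides along in the session cookie.
    session.post(f"{APIC}/api/aaaLogin.json", json={
        "aaaUser": {"attributes": {"name": "admin", "pwd": "password"}}
    })

    # 2. One POST builds a Tenant containing a Private Network (VRF), a Bridge
    #    Domain, a contract and an Application Network Profile with two EPGs.
    tenant = {
        "fvTenant": {
            "attributes": {"name": "ExampleTenant"},
            "children": [
                {"fvCtx": {"attributes": {"name": "ExampleVRF"}}},
                {"fvBD": {
                    "attributes": {"name": "Web_BD"},
                    "children": [{"fvRsCtx": {"attributes": {"tnFvCtxName": "ExampleVRF"}}}],
                }},
                # A real contract subject would also reference a filter
                # (vzRsSubjFiltAtt) to scope the allowed TCP ports.
                {"vzBrCP": {
                    "attributes": {"name": "web-to-app"},
                    "children": [{"vzSubj": {"attributes": {"name": "http"}}}],
                }},
                {"fvAp": {
                    "attributes": {"name": "ThreeTierApp"},
                    "children": [
                        {"fvAEPg": {
                            "attributes": {"name": "Web"},
                            "children": [
                                {"fvRsBd": {"attributes": {"tnFvBDName": "Web_BD"}}},
                                {"fvRsCons": {"attributes": {"tnVzBrCPName": "web-to-app"}}},
                            ],
                        }},
                        {"fvAEPg": {
                            "attributes": {"name": "App"},
                            "children": [
                                {"fvRsBd": {"attributes": {"tnFvBDName": "Web_BD"}}},
                                {"fvRsProv": {"attributes": {"tnVzBrCPName": "web-to-app"}}},
                            ],
                        }},
                    ],
                }},
            ],
        }
    }

    resp = session.post(f"{APIC}/api/mo/uni.json", json=tenant)
    print(resp.status_code, resp.text[:200])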

So, how does one go about connecting the Fabric into the virtual network?

In order to administer virtual environments through policy, there must be a way to tie a virtual network into the Fabric. From a VMware virtual network perspective, we are familiar with associating VMs with individual portgroups to establish connectivity to the physical network. When integrating with ACI, this does not change from the virtual perspective. We still place VMs in specific portgroups. The difference is where the portgroups come from. Once an ACI Fabric ties into vCenter, a vDS is created dynamically within vCenter. EPGs are then pushed to the vDS, where they become portgroups. This allows for control of virtual networking through policy.

Wow – sounds like a powerful way of taking networking to the next level. Thank you for the insight into how ACI works.

Of course. If there’s anything to take away from all of this technical speak, it’s that we can see firsthand with our customers how ACI delivers a new way of networking that’s faster, simpler and brings a new level of security to the data center. It’s really powerful and here to stay.

]]>
Benefits & Challenges of SDN Deployment https://www.oneneck.com/blog/sdn-sounds-great-but-now-what/ Tue, 01 Mar 2016 18:00:00 +0000 https://www.oneneck.com/blog/sdn-sounds-great-but-now-what/ Software-defined networking (SDN) is starting to make inroads and is projected to grow from $103 million in 2014 to $5.7 billion in 2019 with the adoption rate among enterprises expected to grow from 6% to 23%, according to research firm IHS. At the same time, IHS predicts service providers around the world will increase their […]]]>

Software-defined networking (SDN) is starting to make inroads and is projected to grow from $103 million in 2014 to $5.7 billion in 2019, with the adoption rate among enterprises expected to grow from 6% to 23%, according to research firm IHS. At the same time, IHS predicts service providers around the world will increase their spending on SDN software by 15 times from 2015 to 2019.

Software-defined WANs, or SD-WANs, are likely to get the highest return from SDN development. Because they can use cheaper broadband connections, SD-WANs are poised to cut the costs of transporting data by shifting traffic away from expensive MPLS connections. Organizations wishing to take advantage of SDN generally do not have the in-house expertise to successfully leverage the technology and will require help from an experienced third party.

The Benefits and Challenges of SDN Deployment

Through SDN deployment, organizations can build networks with greater agility, improved performance, and reduced system overhead – all of which can be used to build technical advantage within competitive markets. With load balancing, on-demand provisioning, and advanced scalability, SDN solutions make it significantly easier and faster to deliver new applications and services for an organization and control every aspect of their network infrastructure.

As SDN represents a radically different approach from traditional, closed network systems, it does require some expertise and knowledge to deploy. Organizations may be working with a legacy network infrastructure or simply have an overly complex system comprised of disparate, loosely integrated parts. Different vendors have taken different approaches to SDN, mainly hardware-centric vs. virtualization through all-software network overlay technology. Choosing between the approaches will come down to the comfort level of the group in charge of the decision, often boiling down to network engineers vs. systems engineers.

To move forward, organizations will need to make a clear business case for the move to SDN and also understand the unknowns. The ability to automate and ultimately move towards an entire software-defined infrastructure, including software-defined storage and SD-WAN, is compelling. As adoption grows, questions revolving around best practices, adoption, performance and security will be played out in greater numbers, and those with the most experience deploying the technology will play a greater role in its future development.

The Need for Specialized Third-Party Services

No matter the route you choose, an SDN network solution requires the integration of multiple types of network technology and demands that network architects and administrators have a deep understanding of the technologies involved as well as the organization’s needs.

Those who wish to fully leverage the power of SDN architecture will need to work with an integrator well-versed in SDN technology and modern network infrastructure. OneNeck IT Solutions is committed to developing the best in streamlined and hybrid technology for both mid-market and enterprise companies. At OneNeck IT Solutions, the emphasis is not on just building a network but also on building customer relationships to guide you through the right decisions for your organization.

]]>
Is an Enterprise Mobility Strategy Necessary | OneNeck https://www.oneneck.com/blog/is-an-enterprise-mobility-strategy-really-necessary/ Fri, 12 Feb 2016 18:00:00 +0000 https://www.oneneck.com/blog/is-an-enterprise-mobility-strategy-really-necessary/ Enterprise Mobility Management (EMM) isn’t just about giving employees mobile access to their company email, contacts and calendar anymore. As more and more mission-critical apps and critical data are pushed to users’ mobile devices, it is now necessary for organizations to offer more sophisticated tools and tracking to assure ongoing productivity and security. The ability […]]]>

Enterprise Mobility Management (EMM) isn’t just about giving employees mobile access to their company email, contacts and calendar anymore. As more and more mission-critical apps and critical data are pushed to users’ mobile devices, it is now necessary for organizations to offer more sophisticated tools and tracking to assure ongoing productivity and security. The ability to provide managed access, deliver and manage mobile apps, and to support connectivity to tablets and smartphones are now basic requirements of today’s EMM strategy.

 

Making BYOD Work at Work

Between 20-40% of employees in businesses today have been given a mobile device for business use, and another 60-80% are using their own personal devices. So many mobile devices, from so many varied sources, create a lot of questions and potential risk when it comes to support, security and management.

Some of the specific challenges that businesses are facing today in a corporate, Bring Your Own Device (BYOD) environment include:

  • Device limitations and differences — namely in operating system and hardware differences.
  • Complexity and consistency of version updates, as well as various service, billing and liability issues.
  • Security and privacy concerns.

BYOD Security and Usage Concerns

Security concerns will persist in virtually every connected work environment, even if BYOD is not a factor. With the rapid growth of personal and corporate mobile devices, businesses must find a way to address possible security concerns across their users.

Some key questions that can be asked to help address mobile security risks include:

  • How is corporate data shared, accessed and processed across mobile devices — is data being shared across multiple channels?
  • Is company-sensitive data being encrypted on all mediums?
  • Is there an effective usage policy in place for how devices can and cannot be used?
  • What are the guidelines for storing corporate data on personal mobile devices?
  • What is the process for transferring corporate data from the personal device to another place?

The way to address all of these concerns is to employ an effective Enterprise Mobility Management solution that has an implementation plan for all devices. Some important features to look for when choosing an EMM solution include:

  • The ability to enable separation and tracking of data.
  • A universal usage policy for employees who bring their own devices.
  • Inclusion of the correct antivirus protocols and software.
  • Monitoring capabilities to understand the data that is being used and processed by all employees. For example, if personal use of mobile devices during company time is a concern, an EMM service should be able to address the issue by providing a monitoring solution by user and timeframe (a rough sketch follows this list).
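
As a rough illustration of the last two bullets above, the sketch below evaluates enrolled devices against a simple usage policy and flags the ones that need attention. It is not any particular EMM product’s API; the device fields, minimum OS versions and checks are assumptions.

    # Illustrative device-compliance check, the kind of continuous evaluation an
    # EMM platform performs. Fields, versions and thresholds are assumptions,
    # not any vendor's API.

    REQUIRED_OS = {"ios": "17.0", "android": "14"}   # assumed minimum versions

    devices = [
        {"user": "alice", "os": "ios", "os_version": "17.2",
         "encrypted": True, "antivirus": True},
        {"user": "bob", "os": "android", "os_version": "12",
         "encrypted": False, "antivirus": False},
    ]

    def version_tuple(version: str) -> tuple:
        """Turn '17.2' into (17, 2) so versions compare numerically."""
        return tuple(int(part) for part in version.split("."))

    def compliance_issues(device: dict) -> list:
        """Return the list of policy violations for one enrolled device."""
        issues = []
        if not device["encrypted"]:
            issues.append("storage not encrypted")
        if not device["antivirus"]:
            issues.append("antivirus missing")
        required = REQUIRED_OS.get(device["os"], "0")
        if version_tuple(device["os_version"]) < version_tuple(required):
            issues.append("operating system out of date")
        return issues

    for d in devices:
        problems = compliance_issues(d)
        print(d["user"], "->", "compliant" if not problems else "; ".join(problems))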

The Right Approach

A flexible and all-inclusive approach to EMM can bridge the gaps that many organizations are facing today, and support the same level of productivity for all employees, no matter what operating system or hardware their phone, tablet or laptop is using. An effective EMM approach should bring a comprehensive standard for all apps, customer relationship management solutions, email, and other mobile deployments.

]]>
Develop Information Technology Strategy | Managed IT Support https://www.oneneck.com/blog/the-importance-of-developing-an-information-technology-strategy/ Fri, 12 Feb 2016 18:00:00 +0000 https://www.oneneck.com/blog/the-importance-of-developing-an-information-technology-strategy/ Network technology is continually changing, as are the regulatory and security issues associated with operating any enterprise system. An effective networking strategy has to extend beyond allocating the budget for system maintenance and upgrades. It must also consider how emerging technologies, such as the cloud, will affect your network and integrate into your infrastructure. To […]]]>

Network technology is continually changing, as are the regulatory and security issues associated with operating any enterprise system. An effective networking strategy has to extend beyond allocating the budget for system maintenance and upgrades. It must also consider how emerging technologies, such as the cloud, will affect your network and integrate into your infrastructure. To stay current with the latest technology, and lay the foundation for the future, you have to have an effective IT strategic roadmap in place, a roadmap that takes into consideration the myriad hybrid IT options in today’s market and lays out the best plan for your organization.

Without an IT strategic plan, companies often find themselves trying to “make do” with outdated technology, patching obsolete software and struggling to get one more year of service from an outdated server or application. The problem with the “waiting until it fails” approach is that any enterprise system failure tends to have a ripple effect; failure of one component or application tends to highlight other weaknesses or obsolete systems in the enterprise.

With an IT strategic plan in place you have a blueprint for maintenance and upgrades, including system interdependencies and a reasonable timeframe for system implementation. Now, rather than reacting to problems, you follow the proactive plan (your blueprint) to improve operations and support broader, corporate objectives.

Not sure how to begin developing a corporate blueprint? The OneNeck Professional Services team has a well-defined process and can help you create an effective IT strategic plan. Here are just a few of the recommended steps:

  1. Review strategic objectives – Before you can develop a plan you have to have goals. The objectives the IT department sets for enterprise upgrades and expansion have to align with corporate objectives. Any IT initiative needs to assess the project’s goals in light of larger objectives. By reviewing and listing IT strategic objectives at the outset, you can be sure the IT goals support corporate objectives and priorities.
  2. Executive agreement – Once you determine the goals for the IT strategic plan, you want to be sure those goals are sanctioned by the operational and executive staff. Soliciting senior management buy-in at the outset eliminates problems later and helps protect your budget. Rather than holding a team meeting, consider approaching the senior team individually; they will be more likely to open up and offer more candid responses. Be prepared with the facts – SWOT (strengths, weaknesses, opportunities, threats) analyses and results from the personal interviews. Executive agreement up front is key to your roadmap’s success.
  3. Create metrics – As part of the plan, you need to create metrics for objectives. As with MBOs, you want to establish measurable goals and create a timeline for achievement. You can always adjust metrics later if need be, but having a yardstick to measure progress is essential to success.
  4. Infrastructure assessment – Inventory your current enterprise infrastructure to determine what assets are already in place that can be used as part of the IT strategic plan. This way, you’ll have a comprehensive understanding of your enterprise components and capabilities before mapping out changes to the infrastructure.
  5. Gap review – Now that you have a list of agreed-upon goals and objectives and an inventory of existing systems, where are the gaps? What hardware and software are missing to help you achieve your goals? Are there system upgrades or swap-outs required? Prioritize the missing elements into “must have” and “nice to have” and then assess the interdependencies within the infrastructure.
  6. Develop a timeline and budget – Now that you have an inventory of objectives and the technology needed to achieve those objectives you can develop a step-by-step timeline for deployment over time. Consider deploying the highest priority system(s) first and adding new hardware, software and infrastructure along the way. You also need to consider system interdependencies so you can add or upgrade technology in the right order. With the timeline in place, you can establish a budget for deployment, mapping costs to implementation goals from year to year.

Of course, the IT strategic plan won’t be static. Changes in corporate goals and objectives will impact strategic IT implementations. Other factors, such as changes in the economy and new technology could also require adjustments to your strategy. Your plan needs to be fluid to accommodate changing conditions.

To keep your IT strategic plan current, IT management has to be included in strategic corporate discussions. The vision of the company needs to be reflected and supported by the IT strategic plan. Using external resources and experts, such as OneNeck’s Professional Services (PS) team, can simplify the development of a comprehensive IT strategic plan and provide an independent perspective. The OneNeck team offers a full complement of design and implementation, IT assessment and technology consulting services. The PS team works closely with your company to understand your business and IT needs in order to develop IT strategies that maximize ROI while minimizing total cost of ownership.

If you want to stay current with the latest technology while minimizing risk, reducing delays and reducing costs, contact the OneNeck Professional Services team.

]]>
Understanding Cisco iWAN https://www.oneneck.com/blog/understanding-cisco-iwan/ Thu, 04 Feb 2016 18:00:00 +0000 https://www.oneneck.com/blog/understanding-cisco-iwan/ Many businesses rely heavily on a WAN (wide area network) for processing transactions, providing visibility and enabling collaboration, especially when they operate in multiple locations or branches. Some of these organizations are finding that the load exceeds the capabilities of their existing infrastructure. Rather than replace existing networks or adding new bandwidth, many savvy organizations […]]]>

Many businesses rely heavily on a WAN (wide area network) for processing transactions, providing visibility and enabling collaboration, especially when they operate in multiple locations or branches. Some of these organizations are finding that the load exceeds the capabilities of their existing infrastructure. Rather than replacing existing networks or adding new bandwidth, many savvy organizations are opting to install Cisco’s intelligent WAN, also known as iWAN.

What is Intelligent WAN?

An iWAN consists of specialized routers that incorporate advanced traffic control and security features. The new features allow traffic to be routed over the Internet, cellular connections or other low-cost links to provide increased bandwidth without compromising the reliability, performance or security of the existing network. As a result, the business has the capabilities of an MPLS (Multiprotocol Label Switching) WAN at a more affordable cost. The best intelligent networks, including Cisco’s iWAN, provide QoS (quality of service), VPN tunneling and WAN optimization for application performance on a par with a well-tuned traditional WAN.

Multiple iWAN configurations exist to conform to a company’s specific requirements and setup. In the first two cases, all traffic is routed to the headquarters network for further routing decisions and to route Internet traffic.

  • An iWAN may use traditional WAN capabilities coupled with a VPN over the Internet as a transport for key workloads to provide bandwidth to support SLAs. This setup is known as a hybrid iWAN.
  • A dual-Internet iWAN design combines services from two ISPs to balance workloads, costs and performance.

The third configuration is known as DIA (direct Internet access). This option forwards branch data directly to the Internet without requiring it to pass through the headquarters routing first. This may reduce latency and improve performance, but it requires additional security measures to prevent intrusion and protect data.

What Are the Components of Cisco iWAN?

An iWAN typically consists of specialized routers deployed at branches and connected to the headquarters WAN. These specialized routers have the ability to use the Internet or cellular connections to transport data, rather than requiring a traditional WAN. These branch routers are usually deployed and managed remotely using a network management tool designed for the purpose.

Although there are many applications available for iWAN management, Cisco Prime is the industry leader because of its rich performance data, simple deployment and remote management capabilities. In addition, Cisco Integrated Services Routers are designed specifically for use at branch locations of the distributed organization. They offer increased bandwidth to support increasing demands for public and private cloud applications, mobile users and video conferencing or unified communications. These routers include remote management capabilities so there is little or no need for on-site IT resources. Since many branches are short on budget and space, they also have a small footprint and a low TCO.

What Are the Benefits of Cisco iWAN?

  • Affordable: iWAN uses less expensive WAN links such as Internet or wireless, so you can increase bandwidth and roll out new services cost-effectively.
  • Transport independence: the solution uses a dynamic multipoint VPN (DMVPN) overlay across all connections to create a single routing domain.
  • Intelligent path control: looks at application type, performance, policies and path status to dynamically control data packets (a conceptual sketch follows this list).
  • Optimized application performance: provides tools for improved visibility of applications to enable better performance and workload balancing.
  • Secure connectivity: incorporates VPN, firewalls, network segmentation and other tools to ensure better network security.
  • Simplified management: uses a single platform to manage the entire network across all platforms and locations.
  • Fast ROI: many companies find the solution pays for itself in just a few months.
  • Reliability: most sites enjoy 99.999 percent uptime.
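
To make the path-control bullet concrete, here is a conceptual Python sketch of the decision logic involved: given per-path measurements and a per-application policy, pick the cheapest path that still meets the policy, and fall back to the most reliable link when nothing qualifies. This is not Cisco PfR or iWAN code; the paths, metrics and thresholds are illustrative assumptions.

    # Conceptual sketch of intelligent path control: choose a WAN path per
    # application based on measured performance and policy. Not Cisco PfR/iWAN
    # code; paths, metrics and thresholds are illustrative assumptions.

    paths = {
        "internet": {"latency_ms": 45, "loss_pct": 0.6, "cost": 1},   # cheap broadband
        "mpls":     {"latency_ms": 30, "loss_pct": 0.0, "cost": 10},  # expensive, reliable
    }

    # Per-application policy: maximum tolerable latency and loss.
    policies = {
        "voice":  {"max_latency_ms": 150, "max_loss_pct": 0.5},
        "backup": {"max_latency_ms": 500, "max_loss_pct": 2.0},
    }

    def pick_path(app: str) -> str:
        """Return the cheapest path that satisfies the application's policy."""
        policy = policies[app]
        candidates = [
            name for name, m in paths.items()
            if m["latency_ms"] <= policy["max_latency_ms"]
            and m["loss_pct"] <= policy["max_loss_pct"]
        ]
        # Fall back to the most reliable (here: most expensive) path if none qualify.
        if not candidates:
            return max(paths, key=lambda n: paths[n]["cost"])
        return min(candidates, key=lambda n: paths[n]["cost"])

    print(pick_path("voice"))   # 'mpls'     - Internet loss (0.6%) exceeds the 0.5% policy
    print(pick_path("backup"))  # 'internet' - cheaper path still meets the looser policy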

To learn more about whether an iWAN solution can help your company roll out critical services to branches at a reasonable cost, or to affordably use the cloud for critical business applications without adding expensive and complicated network infrastructure, contact us today.

]]>