Monday, April 25, 2016

SWOT Analysis

SWOT is a strategic planning tool used to evaluate the strengths, weaknesses, opportunities, and threats of a project or a new idea, and thus to shape the path toward the desired result. Strengths and weaknesses are considered internal factors, while opportunities and threats are external factors that can affect the organization or the project. It is an important part of the project planning process in today's business world.



Strengths include the attributes of the organization that help achieve the project objective. They also cover the advantages of the result, the profit it brings the organization, and the uniqueness of the project. Weaknesses include the attributes of the organization that hinder achievement of the project objective. Factors under this label include lack of expertise, budget limitations, and areas to avoid identified during market feasibility work. Opportunities are the external conditions that help achieve the project objective. They cover analysis of market trends and potential, technology and infrastructure development, market demand, and R&D. Threats are the external conditions that could damage the project. Factors among threats include the environment, the economy and its seasonal effects, field-specific obstacles, and feasibility in the current market. Business models nowadays also use TOWS, which emphasizes the external environment, while SWOT focuses on the internal environment.

The SWOT analysis is typically conducted using a four-square template in which all the strengths, weaknesses, opportunities, and threats are listed in their respective boxes. This is done through a brainstorming session among the group to capture the factors in each box, after which a finalized version is created for review. The result helps develop short-term and long-term strategies for the business or project, maximizing the positive influences on it and minimizing the negative ones. The SWOT analysis thus forms a simple but useful framework for analyzing the situation and kicking off strategy formulation to make the project successful. When carrying out a SWOT analysis, one has to be realistic and rigorous at the right level of detail. SWOT analysis then becomes part of regular strategy meetings among the group to achieve what the business requires at the appropriate time.
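As a rough illustration of the four-square template described above (a minimal sketch only; the factor names and the swot structure below are hypothetical, not from this post), the brainstormed items can be captured in four lists and printed as internal versus external groups:

```python
# Minimal sketch of a SWOT four-square template (illustrative only).
# Strengths/weaknesses are internal factors, opportunities/threats are external.
swot = {
    "Strengths": ["Unique project result", "In-house expertise"],
    "Weaknesses": ["Budget limitations", "Missing expertise"],
    "Opportunities": ["Growing market demand", "New infrastructure"],
    "Threats": ["Seasonal economy", "Competing products"],
}

def print_swot(analysis):
    """Print the four boxes grouped as internal vs. external factors."""
    groups = (("Internal", ("Strengths", "Weaknesses")),
              ("External", ("Opportunities", "Threats")))
    for label, boxes in groups:
        print(f"== {label} factors ==")
        for box in boxes:
            print(f"  {box}: {', '.join(analysis[box])}")

print_swot(swot)
```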

Friday, April 22, 2016

Force Field Analysis

Force-field analysis is one of the primary techniques used in business development, process management, decision making, and the social sciences, and was developed by Kurt Lewin, a pioneer in the field. It has shown many business models time and again how powerful a simple methodology can be: it gives an overview of the different forces acting on a potential change and allows their source and strength to be assessed. The principle of force-field analysis is that any situation sits at its current state because the forces pushing for change (driving forces) are counterbalanced by forces (restraining forces) keeping it the way it is.


The driving forces are the forces acting on a situation that support the desired state or change, and the restraining forces are those resisting the driving forces. The approach is to shift the equilibrium in change situations, that is, to make the driving forces stronger than the restraining forces. Lewin's model includes three important steps. The first is unfreezing, which reduces the strength of the forces that have been maintaining the current equilibrium. The next is moving, which develops the new organizational values, attitudes, and ideas that help the situation move on. The final one is refreezing, which stabilizes the situation after the changes are implemented.

The force-field analysis process is applied to a particular issue or change by a group of people analyzing the situation. The members of the group identify and list the driving and restraining forces affecting the issue or change to be implemented, but they must define the target of change before outlining the forces acting on it. The target of change is written in a box, all the driving forces are written on its left side, and the restraining forces are written on the right side. The driving and restraining forces are then scored with a magnitude ranging from one (lowest) to five (highest). This helps the team analyze the imbalance in the equilibrium. The team then researches the forces and determines the changes needed to strengthen the driving forces and weaken the restraining forces. This helps the team record and review where there is consensus on an action or way forward. The team finalizes what actions or plans need to be carried out or changed to achieve the intended situation or change. Force-field analysis also helps create alternative methods to support value-based management in the long run.
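As a small illustration of the scoring step described above (the forces and their one-to-five weights below are made-up examples, not taken from Lewin or this post), the magnitudes on each side can be summed to see which way the equilibrium currently leans:

```python
# Hypothetical force-field worksheet: each force gets a magnitude from 1 to 5.
driving = {"Customer demand": 4, "Cost savings": 3, "Management support": 2}
restraining = {"Staff resistance": 4, "Retraining cost": 3, "Legacy systems": 5}

drive = sum(driving.values())
restrain = sum(restraining.values())
print(f"Driving total: {drive}, Restraining total: {restrain}")

if drive > restrain:
    print("Equilibrium favors the change; reinforce the driving forces.")
else:
    print("Restraining forces dominate; weaken them before proceeding.")
```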

Tuesday, April 12, 2016

Portable Modular Data Center

Portable Modular Data Center (PMDC) is a new approach to a modular and efficient data center solution and is part of the IBM data center family. The entire data center is built inside containers that follow the ISO standard. It can be shipped easily across the country or abroad by truck or ship, and in some unique cases by plane. The main reason for building it inside a container is that it can be kept in a controlled environment, which makes installation easier and allows versatile movement for any purpose. This solves the problem of temporarily supporting a data center that needs expansion, recovery, or replacement.


An example PMDC is shown, measuring 20 x 20 x 8 feet, with an expandable option to fit more IT equipment inside the container based on the client's requirements. The container is well insulated, fire protected, and shielded against RF and electromagnetic interference, protecting the data on the devices and the equipment from damage. Sensors inside the container also record who enters, as well as the temperature and humidity. The containers are built with a rack system on rails for maintaining and accessing the devices, cables, and equipment. The PMDC has an open IT architecture, so any IT equipment can be installed; it does not need to be IBM equipment alone. Separate chiller equipment, a UPS, and dedicated heat exchangers fit neatly in the container and provide the adequate temperature for the devices. Traditionally this required separate facilities, but IBM managed it by partnering with other leading suppliers to fit everything into this new integrated PMDC concept.

The detection and suppression system constantly samples the air, checking for particles that might indicate a fire; if it senses them, it releases gas to suppress the fire. On the other side is the main power panel, which contains the main circuit breakers and their branches and a transient voltage suppressor that conditions the incoming power supply before it reaches the UPS. All of this helps deliver a turn-key data center within a container. Originally this solution was aimed at niche markets such as oil exploration and military operations, but customers also turned out to need it as temporary space during expansion or repair. To operate a PMDC, three essentials are needed: electric power to run it, a water source for the humidifier, and a network connection to the outside world. The typical design of this PMDC has a PUE of 1.3, compared to 2.5 to 2.7 for traditional data centers. The cost of a 17-rack container is around 500-600 thousand dollars.
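To make the PUE comparison concrete, the sketch below applies the standard PUE ratio (total facility power divided by IT equipment power); the wattage figures are made-up examples chosen only to land near the 1.3 and 2.6 values quoted above.

```python
# PUE = total facility power / IT equipment power (lower is better, 1.0 is ideal).
def pue(total_facility_kw, it_equipment_kw):
    return total_facility_kw / it_equipment_kw

# Hypothetical loads: the same 100 kW of IT gear in a PMDC vs. a traditional room.
print(f"PMDC-style design:  PUE = {pue(130, 100):.1f}")   # about 1.3
print(f"Traditional design: PUE = {pue(260, 100):.1f}")   # about 2.6
```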

Friday, April 8, 2016

Data migration checklist

Data centers, being complicated systems of combined software and hardware, require a proper migration checklist to avoid the risks of data loss, business failure, and disruption. This article lays out a checklist of guidelines for data center migration. The first step is to establish the reason for migrating. The reasons should be analyzed in depth, along with the potential challenges the IT business will face. This helps the company save money, which can be achieved through data consolidation and right-sizing by combining systems. The second step is a clear-cut plan mapped out for the migration. This includes the finances to be considered along with limiting factors such as system availability, security of the data, and new systems for potential future growth. The next step is getting everyone on board, because even a modest organizational change can affect the growth and future stakes of the company. A team effort reduces the panic the team would otherwise face in migrating to the new system. It is also important that other employees know the migration deadlines and that their concerns are well supported.

The fourth step forms the important core, which is to identify the supporting systems, such as hardware, software, network, servers, cooling and power systems, and the data involved in the migration. This helps the systems migrate smoothly without losing data. The fifth step is identifying the data interruption or downtime during migration. This involves more planning with the different functions involved and working out the backup windows available to help. The sixth step is developing a contingency plan and keeping up with whatever may arise during migration. This lessens the impact of problems arising during migration and reduces downtime. It also covers how much the business will spend on devices to support the migration and to build the future system. The next step is dealing with the small stuff, such as having enough support from staff during the migration, handling unexpected delays, and having devices available at the right time.

The eighth step is taking baby steps, not giant leaps. Since the data transfer happens in stages, testing has to happen at each stage, such as installation, deployment, and dry-run testing. The IT team should also test the backup systems before making the change. The ninth step is handling the old systems and devices, for which a detailed decommissioning and rebuilding plan reduces e-waste and ensures the old data is wiped completely. The final step is the process update. This is done by managers who document the procedures and processes to help the staff get accustomed to the new system. When this checklist is followed by the team, the business is far better placed to deal with any problems arising during the migration.
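Purely as an illustration of tracking the ten steps above (the step names are paraphrased from this post; the tracking structure itself is not part of the original checklist), a migration team could keep the checklist in a simple ordered structure and tick items off as they complete:

```python
# Simple tracker for the ten migration steps described above (illustrative only).
steps = [
    "Establish the reason for migration",
    "Map out a clear migration plan",
    "Get everyone on board",
    "Identify supporting systems and data",
    "Identify interruption/downtime windows",
    "Develop a contingency plan",
    "Handle the small stuff (staffing, delays, devices)",
    "Migrate and test in small steps",
    "Decommission old systems and wipe data",
    "Update and document processes",
]
done = set()

def complete(step_index):
    """Mark a step as finished."""
    done.add(step_index)

def remaining():
    """Return the steps not yet completed, in order."""
    return [s for i, s in enumerate(steps) if i not in done]

complete(0)
complete(1)
print("Remaining steps:", remaining())
```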

Monday, April 4, 2016

What Does Open Source Software Mean to US Government?

The US Government has shown its love for open source software. On March 10 the White House published a blog post announcing a new campaign designed to increase the use of open source by the government. It also drafted a policy document on source code, which is open for comment on GitHub. According to the government, the new drive toward open source reflects an effort to save money, avoid failures, and enable easier collaboration. "We can save taxpayer dollars by avoiding duplicative custom software purchases and promote innovation and collaboration across Federal agencies," the blog post said.



For supporters, the government's promise to become friendlier toward open source is no doubt a step in the right direction. The only problem is the way the government defines open source and sets it in its own terms. The blog post merely mentions sharing source code between federal agencies and releasing a fragment of that code to the public. Moreover, the only open source projects the blog post mentions are ones centered on open data rather than open code; they revolve around information and databases of it, not code. Using open source tooling to build a web interface should not be confused with what open source code really stands for.

In that respect, the government seems to be confusing open data with open source code, which is a very different thing from developing and sharing source code publicly. More engagement by the federal government with open source is not a bad thing, even if the government doesn't get it totally right. Yet by failing both to define open source and to appreciate the significant difference between open source code and open datasets, the government is altering, to its own satisfaction, the meaning of open source itself. Treating available data and information as open source, without considering the source code that everything is built upon and that is the actual underlying structure running these data warehouses and repositories, drains the value of open source to the point where it becomes insignificant.

How to Choose a Micro-segmentation Solution to Protect VMs

Virtualization is expected to reduce the physical boundaries between applications and workloads, making them virtual and dynamic. Companies have long been searching for a method or technology that can provide similar benefits for security control in the cloud and govern the traffic through their data centers. Thus came micro-segmentation. It uses software to manage virtual machines: the VMs may be on different servers or the same server, in different groupings, isolated or not, and can still all be controlled effectively with access control applied.



To ensure security, traditional networks are usually divided into security zones, where groups of assets such as servers or desktops are put on different network segments. Security policies are then enforced on the traffic between these zones. The zones can be set up as needed along departmental boundaries, by function, or for security. This division creates regions where an access violation cannot penetrate the other zones as quickly or effectively, and in turn does not hinder the daily, regular usage and performance of the cloud.
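As a rough sketch of how such zone- or segment-level policies can be expressed in software (the segment names and rules below are hypothetical, not taken from any particular product), an allow-list keyed by source and destination segment can be checked before a flow is permitted; with micro-segmentation the segments can be as fine-grained as individual VM groups:

```python
# Hypothetical segment-level allow-list: traffic is denied unless a rule permits it.
ALLOWED = {
    ("web-tier", "app-tier"): {"tcp/8080"},
    ("app-tier", "db-tier"): {"tcp/5432"},
    ("admin", "db-tier"): {"tcp/22"},
}

def is_allowed(src_segment, dst_segment, service):
    """Return True only if an explicit rule permits this flow (default deny)."""
    return service in ALLOWED.get((src_segment, dst_segment), set())

print(is_allowed("web-tier", "app-tier", "tcp/8080"))  # True
print(is_allowed("web-tier", "db-tier", "tcp/5432"))   # False: no direct web-to-db rule
```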

Micro-segmentation is not a complete fix, though. There are issues, such as virtualization security, that it has yet to answer. Micro-segmentation is held accountable on multiple fronts: the technology needs to offer the same level of elasticity that the data center provides, handling both changes in the size of the physical infrastructure and changes in the workloads that run on it. The need to work with a diverse set of hardware and software environments also stands strong. In order to provide on-demand security in the virtualized environment, micro-segmentation must support changes to security functionality without changes to the infrastructure. Finally, the micro-segmentation solution needs to integrate well with cloud orchestration and avoid intrusive changes to the cloud infrastructure.

Google Data Center

This video describes the Google data center located in The Dalles, OR, where even most Google employees can't get in due to security clearance. The Site Reliability Engineering team writes and maintains the software systems that keep the services running smoothly and avoid failures. A 24/7 team of SREs is on call to tackle any problems related to unexpected failures, and highly redundant power, networking, and serving domains prevent the loss of a cluster and minimize the impact.



Entry to the data center floor is protected by a biometric iris scanner and a circle-lock door and requires dual authentication. The data center floor shows a huge number of servers forming a single cluster. Managing these servers requires several tools, such as Borg, Colossus, and Spanner, along with Kubernetes, Google Cloud Storage, and BigQuery, used by Google engineers and Cloud customers. One of the network engineers explains how Hardware Ops expands the data center by deploying additional machines inside the building. Google has pre-planned for expansions of this scale using Jupiter, its data center network technology, which provides a hierarchical design built on software-defined networking principles. A single building supporting 75,000 machines requires a great deal of optical fiber and carries over one petabit per second of bandwidth. This has helped Google reliably access storage with low latency and high throughput. It is also noted that Google runs on B4, its private network, which has been growing faster than its Internet-facing network; B4 connects all of the Google data centers and allows smooth access to resources across them. Next, the data center's data storage safety protocol is explained. When a hard drive or SSD fails, the drives are wiped or shredded to protect the data, and the protocol calls for a strict chain of custody from the time a drive is removed until a new one is commissioned.

Next comes the important part, where cooling and powering the infrastructure are explained in the video, since a lot of heat is generated in the server area and has to be removed. The cooling plant on the site has two water loops, the condenser and process water loops (differentiated by color). The process water loop takes the heat off the server floor, and the condenser water loop takes cold water from the basin to the heat exchangers. Power usage effectiveness (PUE) in most Google data centers is close to the ideal value of 1.0, meaning very low power overhead. The site also uses a chiller, which is needed at times to keep the water in the cooling tower at the desired temperature; it is this water that feeds the condenser water loop. The cooling tower normally uses evaporation to cool the water at a faster rate. Finally, the Google-owned power station is shown, which powers the Cloud. This is where high-voltage power enters the site before going to the distribution centers, and multiple generators are used to prevent outages. The best part is that all the power comes from a nearby hydroelectric power station, making the site 100% carbon neutral.

Why Google Doesn’t Outsource Data Center Operations

To start off, outsourcing means contracting work out. We know human error is one of the root causes of data center outages. That means it is avoidable, yet often not prevented, and it comes at the expense of the user's time and the company's money. Any mistake means a great deal of money lost. About 70 percent of data center reliability incidents are caused by human error on average, according to a study by the Uptime Institute, while only 15.4 percent of incidents at Google data centers were caused by human error over the past two years.
How does Google keep that number low? They go to the experts, the one percent. Now, this one percent is not like the one percent you may find in places like Occupy Wall Street. These are, in the words of Joe Kava, Google's top data center operations exec, "highly qualified people, they are systems thinkers," he said. "They understand how systems interact and how they work together." In fact, very few Googlers are even allowed to visit the company's data centers in the first place, which in numerical terms can be less than one percent. The only time someone does get in is when there is a specific business reason to be there, so no one is ever there without a cause or plausible need. Those responsible for data center operations work alongside those who design and build the facilities. This allows consistent feedback and ideas to flow between the teams, so each time a new data center is built, it is better than the previous one.

     


The reason that Google doesn't outsource is pretty simple. In the rest of the industry, the contractor hands over the design details, drawings, blueprints, manuals, and so on to the data center operators. The operator is usually not employed by the owner but instead outsourced to the lowest bidder. This leaves the owner with no real control over the quality of service and no assurance that, if something goes wrong, the operators will actually do something about it.

Saturday, April 2, 2016

Facebook Data Center

This video discusses the Facebook data center. Handling the profiles of 1 in 7 people on earth requires a staggering scale of devices and technology. The data center of this 104-billion-dollar company, which grows at a rate of 100 million users every six months, is explained in detail along with its challenges. Handling such a huge amount of data requires a monster data center, the first of which opened in Prineville, OR; it is the size of three football stadiums put together (about 300,000 sq ft) and cost hundreds of millions of dollars. It handles the data flying between your computer and the internet at the speed of light over 21 million feet of fiber optic cable.



Ken Patchett, GM of the data center, clearly explains in this video how the facility works. The data center makes sure data travels from the internet to the data center to your laptop in milliseconds. The video offers a fine illustration of seemingly never-ending servers in different racks. The facility has 30 megawatts of electricity on tap so it never runs out of power. It also uses huge diesel generators as backup to prevent power outages and the data loss they would cause; these generators kick in immediately whenever utility power is lost. A massive seven-room rooftop, state-of-the-art (chiller-less) natural air conditioning system removes the heat generated by the servers and acts as a heat sink. Cool air from the high plains of Oregon is drawn in and mixed with warm air to regulate the temperature of the server area, keeping the servers from overheating. On hot days, the Prineville data center is designed to use evaporative cooling instead of a chiller system.



To keep up with the growth in users and data, thousands of servers and memory modules are brought in daily. They use both Intel and AMD chips with custom-made motherboards and chassis, allowing larger heat sinks and fans to improve cooling efficiency. When a server fails, the technicians get a notice of which one failed, but it is very difficult to find it in this massive place. Facebook has spent more than $1 billion on the infrastructure and is still running out of space to accommodate its growth. With this rate of growth in online involvement, it needs to expand even faster.

Reference: https://www.youtube.com/watch?v=Y8Rgje94iI0

Friday, April 1, 2016

Data Center - Security and Risk Management

Security and risk management in the data center is discussed in this video. The first key challenge is risk management, which is explained as a layered physical security approach. The goal is to protect the data from criminals, third-party contractors, or employees who happen to access it with or without their knowledge. This layered security approach is intended to detect, deter, or detain anyone attempting a breach. It makes data management less risky and gives us peace of mind.



The first layer is called perimeter defense, which includes video surveillance and a fence to protect the area, with limited access points and physical barriers. This security layer delays intruders' access to the second layer and to the point of entry. The second layer is the clear zone, the area between the perimeter security and the building. This area is covered by video surveillance to identify intrusions and breaches. The third layer is the highest level of perimeter security the data center has, and it offers the opportunity to prevent unauthorized access toward the fourth layer. It includes a key system, video surveillance, card readers, a security vestibule, and perimeter cameras. The fourth layer of security validates the access of individuals with digital signage, card readers, mass notification displays, an IBW antenna, and photo badging stations, covering the power/cooling facilities and the data room for all visitors and contractors. These four layers together make up the physical security of the data center.



The fifth layer of security is selective, profile-based security for staff, contractors, and visitors, ensuring that access to this critical space is limited and approved, using motion egress, button egress, and data cabinet lock trigger panels. The sixth layer of security provides controlled access and accountability directly at the equipment location. The bill of materials for physical security is central here and can be challenging to assemble based on need. These six layers together mitigate the risk of ineffective protection of the data center's critical data. The interoperability of the component subsystems is important to the various stakeholders of that critical data.
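As an illustrative model of the layered idea (the layer names follow this post; the access logic itself is a made-up simplification, not from the video), access to the equipment can be thought of as passing every layer in order, with a failure at any layer stopping the visitor:

```python
# Simplified model of the six-layer physical security approach described above.
LAYERS = [
    "perimeter defense",
    "clear zone",
    "building perimeter",
    "access validation (badging)",
    "profile-based room access",
    "equipment/cabinet access",
]

def grant_access(clearances):
    """A visitor reaches the equipment only by clearing every layer in order."""
    for layer in LAYERS:
        if layer not in clearances:
            return f"Stopped at: {layer}"
    return "Access granted to equipment location"

print(grant_access({"perimeter defense", "clear zone"}))  # stopped at building perimeter
print(grant_access(set(LAYERS)))                          # all layers cleared
```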