Monday, April 25, 2016

SWOT Analysis

SWOT is a strategic planning tool used to evaluate the strengths, weaknesses, opportunities, and threats of a project or a new idea, and thus to shape how the result is pursued. Strengths and weaknesses are internal factors, while opportunities and threats are external factors that can affect the organization or the project. It is an important part of project planning in today's business world.



Strengths are the attributes of the organization that help achieve the project objective. These include the advantages of the result, the profit it brings the organization, and the uniqueness of the project. Weaknesses are the attributes of the organization that hinder achievement of the project objective; factors under this label include lack of expertise, budget limitations, and things to avoid in market feasibility. Opportunities are external conditions that help achieve the project objective, covering analysis of market trends and potential, technology and infrastructure development, market demand, and R&D. Threats are external conditions that could damage the project, such as the environment, the economy and its seasonal effects, field-specific obstacles, and fit with the current market. Business models nowadays also use TOWS, which emphasizes the external environment, while SWOT focuses on the internal environment.

The SWOT analysis is typically conducted using a four-square template in which all the strengths, weaknesses, opportunities, and threats are listed in their respective boxes. This is achieved through a brainstorming session among the group to capture the factors in each box, after which a finalized version is created for review. The result helps develop short-term and long-term strategies for the business or project, maximizing the positive influences on it and minimizing the negative ones. SWOT analysis thus forms a simple but useful framework for analyzing and kicking off strategy formulation to make the project a successful one. When carrying out the analysis, one has to be realistic and rigorous at the right level of detail. In this way SWOT analysis becomes part of regular strategy meetings among the group to achieve what the business requires at the appropriate time.
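
A minimal sketch of the four-square template as a data structure, in Python. The class name and the example entries are hypothetical placeholders, not part of the original post; it simply shows how the two internal and two external quadrants could be captured from a brainstorming session and printed for review.

from dataclasses import dataclass, field
from typing import List

@dataclass
class SwotMatrix:
    strengths: List[str] = field(default_factory=list)      # internal, positive
    weaknesses: List[str] = field(default_factory=list)     # internal, negative
    opportunities: List[str] = field(default_factory=list)  # external, positive
    threats: List[str] = field(default_factory=list)        # external, negative

    def summary(self) -> str:
        rows = [
            ("Strengths", self.strengths),
            ("Weaknesses", self.weaknesses),
            ("Opportunities", self.opportunities),
            ("Threats", self.threats),
        ]
        return "\n".join(f"{name}: {', '.join(items) or '-'}" for name, items in rows)

if __name__ == "__main__":
    swot = SwotMatrix(
        strengths=["experienced team"],
        weaknesses=["limited budget"],
        opportunities=["growing market demand"],
        threats=["seasonal economic effects"],
    )
    print(swot.summary())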

Friday, April 22, 2016

Force Field Analysis

Force-field analysis is one of the primary techniques used in business development, process management, decision making, and the social sciences, and was developed by Kurt Lewin, a pioneer in the field of social sciences. It has proved time and again how powerful a simple methodology can be: it gives an overview of the different forces acting on a potential change and a way to assess their source and strength. The principle of force-field analysis is that any situation sits at its current state because the forces pushing for change (driving forces) are counterbalanced by forces (restraining forces) keeping it that way.


The driving forces are those acting on a situation that push toward the desired change, and the restraining forces are those holding it back. To move a change forward, the equilibrium must be shifted so that the driving forces become stronger than the restraining forces. Lewin's model includes three important steps. The first is unfreezing, which reduces the strength of the forces maintaining the current equilibrium. The next is moving, which develops the new organizational values, attitudes, and ideas that help the situation move on. The final step is refreezing, which stabilizes the situation after the changes are implemented.

The force-field analysis process is applied to a particular issue or change by a group of people analyzing the situation. The members of the group identify and list the driving and restraining forces affecting the issue or change to be implemented, but they first need to define the target of the change before outlining the forces affecting it. The target of change is written in a box, with all the driving forces written on its left side and the restraining forces on its right. The driving and restraining forces are then scored with a magnitude ranging from one (lowest) to five (highest), which helps the team analyze the imbalance in the equilibrium. The team then researches the forces and determines the changes needed to strengthen the driving forces and weaken the restraining forces. This helps the team record and review where there is consensus on an action or way forward, and finalize what actions or plans are needed to achieve the intended change. Force-field analysis in this way also helps create alternative methods that eventually support value-based management.
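
A short Python sketch of the scoring step described above, assuming the one-to-five magnitude scale from the text. The force names and scores are hypothetical examples used only to show how the two totals and the imbalance could be computed.

# Forces pushing toward the change, scored 1 (lowest) to 5 (highest)
driving_forces = {
    "customer demand": 4,
    "management support": 3,
}
# Forces resisting the change, on the same scale
restraining_forces = {
    "staff retraining cost": 3,
    "fear of disruption": 2,
}

def total(forces):
    return sum(forces.values())

net = total(driving_forces) - total(restraining_forces)
print(f"Driving total: {total(driving_forces)}")
print(f"Restraining total: {total(restraining_forces)}")
print("Change is favored" if net > 0 else "Strengthen drivers or weaken restraints")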

Tuesday, April 12, 2016

Portable Modular Data Center

Portable Modular Data Center (PMDC) is a new approach to a modular and efficient data center solution and is part of the IBM data center family. The entire data center is built inside ISO-standard shipping containers, so it can be shipped easily across the country or abroad by truck, by ship, and in some unique cases by plane. The main reason for building it inside a container is that the equipment sits in a controlled environment, which makes installation easier and allows versatile movement for any purpose. This solves the problem of providing temporary support to a data center that needs expansion, recovery, or replacement.


An example PMDC is shown, with dimensions of 20 x 20 x 8 feet and an expandable option to fit more IT equipment inside the container based on the client's requirements. The entire container is insulated, fire protected, and shielded against RF and electromagnetic interference, protecting both the data and the equipment from damage. Sensors inside the container record who enters as well as the temperature and humidity. The racks are built on a rail system so that devices, cables, and equipment can be accessed and maintained. The PMDC has an open IT architecture, so any IT equipment can be installed; it does not need to be IBM equipment alone. Separate chiller equipment, a UPS, and dedicated heat exchangers fit neatly inside the container and maintain an adequate temperature for the devices. Traditionally this was handled by separate external systems, but IBM managed it by partnering with other leading suppliers to fit everything into this new integrated PMDC concept.

The detection and suppression system constantly samples the air for particles that might indicate a fire and, if it senses any, releases a suppression gas. On the other side is the main power panel, which contains the main circuit breakers and their branches along with a transient voltage suppressor that conditions the incoming power supply before it reaches the UPS. Together these components make the container a turnkey data center. Originally this solution was aimed at niche markets such as oil exploration and military operations, but customers also turned out to need it as temporary space during expansion or repair. To operate a PMDC, three essentials are needed: electric power, a water source for the humidifier, and a network connection to the outside world. The typical PMDC design has a PUE of 1.3, compared with 2.5 to 2.7 for traditional data centers. A 17-rack container costs around 500-600 thousand dollars.
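
For reference, PUE (power usage effectiveness) is the ratio of total facility power to the power delivered to the IT equipment, so lower is better and 1.0 is the ideal. The Python sketch below illustrates the comparison made in the text; the kilowatt figures are hypothetical, and only the 1.3 versus 2.5-2.7 comparison comes from the article.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    # PUE = total facility power / IT equipment power
    return total_facility_kw / it_equipment_kw

# Assume a hypothetical 100 kW of IT load in both cases:
print(round(pue(130.0, 100.0), 2))   # ~1.3, the PMDC design figure
print(round(pue(260.0, 100.0), 2))   # ~2.6, within the traditional range cited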

Friday, April 8, 2016

Data migration checklist

Data centers, being complicated systems of combined software and hardware, require a proper migration checklist to avoid the risks of data loss, business failure, and disruption. This article walks through a checklist of guidelines for data center migration. The first step is establishing the reason for migration: the reasons for migrating should be deeply analyzed along with the potential challenges the IT business will face. This can also save the company money through data consolidation and right-sizing by combining systems. The second step is a clear-cut plan mapped out for the migration, including the finances involved and limiting factors such as system availability, data security, and new systems for potential future growth. The next step is getting everyone on board, because even a modest organizational change can affect the growth and future stakes of the company. A team effort reduces the panic the team would otherwise face in migrating to the new system. It is also important that other employees know about the migration deadlines and that their concerns are well supported.

The fourth step forms the core of the effort: identifying the supporting systems, such as hardware, software, network, servers, cooling and power systems, and the data involved in the migration. This helps the system migrate smoothly without losing data. The fifth step is identifying the data interruption and downtime expected during migration; this involves more planning with the different functional teams and scheduling backup windows to help. The sixth step is developing a contingency plan to keep up with whatever may arise during migration. This lessens the impact of problems that come up and reduces downtime, and it should also cover how much the business will spend on supporting devices and on the future system. The next step is dealing with the small stuff, such as having enough support staff during the migration, handling unexpected delays, and making sure devices are available at the right time.

The eighth step is to take baby steps, not giant leaps. Since the data transfer happens in stages, testing has to happen at each stage, such as installation, deployment, and a dry run. The IT team should also test the backup systems before making the change. The ninth step is handling the old systems and devices with a detailed decommissioning and rebuilding plan, which reduces e-waste and ensures old data is wiped completely. The final step is the process update: managers should document the procedures and processes to help staff get accustomed to the new system. When this checklist is followed, the business is better prepared to deal with any problems that arise during the migration.
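
A minimal sketch of the ten-step checklist as trackable data, in Python. The step names paraphrase the article; the status field and the sample update are assumptions added purely for illustration.

MIGRATION_STEPS = [
    "Understand the reasons for migrating",
    "Map out a clear migration plan and budget",
    "Get stakeholders and staff on board",
    "Identify supporting systems and the data involved",
    "Identify data interruption / downtime windows",
    "Develop a contingency plan",
    "Handle the small stuff (staffing, delays, device availability)",
    "Migrate and test in small steps (installation, deployment, dry run)",
    "Decommission and securely wipe the old systems",
    "Update and document processes for the new system",
]

# Track progress against each step; everything starts as pending.
status = {step: "pending" for step in MIGRATION_STEPS}
status[MIGRATION_STEPS[0]] = "done"

for step, state in status.items():
    print(f"[{state:>7}] {step}")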

Monday, April 4, 2016

What Does Open Source Software Mean to US Government?

The US government has shown its love for open source software. On March 10 the White House published a blog post announcing a new campaign designed to increase the government's use of open source. It also drafted a policy document on source code, which is open for comment on GitHub. According to the government, the new drive toward open source reflects an effort to save money, avoid failures, and make collaboration easier. “We can save taxpayer dollars by avoiding duplicative custom software purchases and promote innovation and collaboration across Federal agencies,” the blog post said.



For supporters, the government’s promise to become friendlier toward open source is no doubt a step in the right direction. The only problem is how the government defines open source and frames it in its own terms. The blog post mentions merely sharing source code between federal agencies and releasing a fragment of the code to the public. Also, the only open source projects the blog post mentions are ones centered on open data rather than open code: they revolve around information and databases of it, not code. Using open source tooling to generate a web interface should not be confused with what open source really stands for.

In that respect, the government seems to be confusing open data with open source code, which is a very different thing from developing and sharing source code publicly. More engagement by the federal government with open source is not a bad thing, even if the government doesn’t get it totally right. Yet by failing both to define open source and to appreciate the significant difference between open source code and open datasets, the government is altering the meaning of open source to its own satisfaction. If data and information are labeled open source without considering the source code that everything is built upon, the actual underlying structure that runs these data warehouses and reserves, then the value of open source drops to the point where it becomes insignificant.

How to Choose a Micro-segmentation Solution to Protect VMs

Virtualization is relied upon to reduce the physical boundaries between applications and workloads, making them virtual and dynamic. Companies have long been searching for a method or technology that can provide similar flexibility for security control in the cloud and control the traffic flowing through their data centers. Thus came micro-segmentation. It uses software to manage virtual machines so that VMs, whether on different servers or the same one and in different groupings or isolation domains, can all be controlled effectively and have access control applied to them.



To ensure security, traditional networks are usually divided into security zones, where groups of assets such as servers or desktops are placed on different network segments. Security policies are then enforced on the traffic between these zones. The zones can be set up as needed along departmental boundaries, by function, or purely for security. This division creates regions where an access violation cannot affect or penetrate the other zones as quickly or effectively, and in turn does not hinder the regular day-to-day use and performance of the cloud.

Micro-segmentation is not a complete fix, though; there are virtualization security issues it has yet to answer. A micro-segmentation solution is held to several standards: the technology needs to offer the same elasticity that the data center provides, handling changes in the size of the physical infrastructure as well as changes in the workloads that run on it, and it needs to work with a diverse set of hardware and software environments. To provide on-demand security in a virtualized environment, micro-segmentation must support changes to security functionality without changes to the infrastructure. Finally, the solution needs to integrate well with cloud orchestration and avoid intrusive changes to the cloud infrastructure.
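
A minimal Python sketch of the idea behind label-based micro-segmentation: each VM carries a segment label, and traffic between two VMs is allowed only if an explicit rule permits that pair of segments, regardless of which physical server hosts them. The segment names, VM names, and rules are hypothetical and do not come from the article or any specific vendor product.

# Default-deny rule set: only these segment-to-segment flows are permitted.
ALLOWED_FLOWS = {
    ("web", "app"),   # web tier may talk to app tier
    ("app", "db"),    # app tier may talk to the database tier
}

# Each VM is tagged with a segment label, independent of its physical host.
vm_segment = {
    "vm-web-01": "web",
    "vm-app-01": "app",
    "vm-db-01": "db",
}

def is_allowed(src_vm: str, dst_vm: str) -> bool:
    """Permit only flows explicitly whitelisted between the VMs' segments."""
    return (vm_segment[src_vm], vm_segment[dst_vm]) in ALLOWED_FLOWS

print(is_allowed("vm-web-01", "vm-app-01"))  # True
print(is_allowed("vm-web-01", "vm-db-01"))   # False: blocked by default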

Google Data Center

This video describes the Google data center located in The Dalles, OR, which even most Google employees cannot enter due to security clearance requirements. The Site Reliability Engineering team writes and maintains the software systems that keep the services running smoothly and avoid failures. SREs are on call 24/7 to tackle any unexpected problems, and highly redundant power, networking, and serving domains prevent the loss of a cluster and minimize the impact of failures.



Entry to the data center floor is protected by a biometric iris scanner and a circle-lock door and requires dual authentication. The data center floor holds a large number of servers forming a single cluster. Managing these servers requires tools such as Borg, Colossus, and Spanner, along with Kubernetes, Google Cloud Storage, and BigQuery, used by Google engineers and Cloud customers. One of the network engineers explains how Hardware Ops expands the data center by deploying additional machines inside the building. Google planned ahead for expansions of this scale using Jupiter, its data center network technology, which provides a hierarchical design built on software-defined networking principles. A single building supporting 75,000 machines requires a great deal of optical fiber cabling and carries over one petabit per second of bandwidth, which lets Google access storage reliably with low latency and high throughput. It is also noted that Google runs on B4, its private network, which has been growing faster than its Internet-facing network; B4 connects all of the Google data centers and allows smooth access to resources across them. Next, the data center's protocol for protecting stored data is explained: when a hard drive or SSD fails, the drive is wiped and shredded to protect the data stored on it, and the protocol calls for a strict chain of custody from the time the drive is removed until a new one is commissioned.

Next comes the important part of the video: cooling and powering the infrastructure, since a lot of heat is generated in the server area and has to be removed. The cooling plant on the site has two water loops, the condenser loop and the process water loop (distinguished by different colors). The process water loop takes the heat off the server floor, and the condenser water loop takes cold water from the basin to the heat exchangers. Power usage effectiveness (PUE) in most Google data centers is close to the ideal value of 1.0, meaning very low power overhead. A chiller is sometimes needed to keep the water in the cooling tower at the desired temperature; this water is then used in the condenser water loop. The cooling tower normally uses evaporation to cool the water quickly. Finally the Google-owned power station is shown, which powers the Cloud; this is where high-voltage power enters the site before going to the distribution centers, and multiple generators are on hand to prevent outages. The best part is that all of the power comes from a nearby hydroelectric power station, making the site 100% carbon neutral.