Wednesday, November 30, 2011

Finale: Services


The final component of an Internet transaction that we are going to look at is the Service.  There are many definitions of what a service is, but for the purposes of this article, a service is a set of standards that software platforms can use to talk to other software platforms to get and transfer information without a human component.  Below is a diagram of a general set of these standards: a Security component, a Reliable Messaging component, the Transaction, the actual Message and its language (XML in this case), and the Metadata to tie it all together.


There are a couple of commonly used technologies to know when discussing Services: SOAP, XML, and WSDL.  SOAP and WSDL are both standards built on the XML language.  SOAP is "a lightweight protocol for the exchange of information in a decentralized, distributed environment" and acts as a virtual envelope for sending data.  WSDL then supplies the diction and syntax of the message, allowing completely different applications to communicate with each other, very much like the Universal Translator in Star Trek.
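To make the envelope idea concrete, here's a rough sketch in Python (standard library only) of what a minimal SOAP envelope might look like.  The `GetQuote` operation and `symbol` parameter are invented for illustration, not taken from any real service:

```python
import xml.etree.ElementTree as ET

SOAP_NS = "http://schemas.xmlsoap.org/soap/envelope/"

def build_soap_envelope(operation, params):
    """Build a minimal SOAP envelope around a (hypothetical) operation call."""
    envelope = ET.Element("{%s}Envelope" % SOAP_NS)
    body = ET.SubElement(envelope, "{%s}Body" % SOAP_NS)
    op = ET.SubElement(body, operation)
    for name, value in params.items():
        child = ET.SubElement(op, name)
        child.text = str(value)
    return ET.tostring(envelope, encoding="unicode")

# Example: wrap a made-up "GetQuote" call in the envelope
xml = build_soap_envelope("GetQuote", {"symbol": "IBM"})
```

The result is an XML document with the namespaced Envelope and Body elements wrapping the call, which is essentially what gets shipped over the wire.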





The idea of a Service has been around for a while; services just have not always been publicly available or documented.  Microsoft is infamous for its products' interoperability, and the words "web service" were first documented as being uttered by Bill Gates at a developers' conference.  EDI was the first attempt to develop some type of data interaction; however, it ended up being costly and hard to implement.  Then, once the Web started gaining ground, SOAP was developed, and that opened the door to the Web service we know today.



Additional Resources:
http://en.wikipedia.org/wiki/Web_service
http://www.w3schools.com/webservices/default.asp
http://www.webopedia.com/TERM/W/Web_Services.html
http://www.businessweek.com/technology/content/feb2005/tc2005028_8000_tc203.htm
http://ws.apache.org/
http://www.ibm.com/developerworks/webservices/
http://msdn.microsoft.com/en-us/library/ms950421.aspx
http://www.innoq.com/resources/ws-standards-poster/
http://www.informationweek.com/news/6506480

Episode 5: Data Storage




The last two parts of the Internet transaction have become specialized over the years.  The first is data storage at the server level.  Most people understand data storage on the desktop: your hard drive space limits the number of documents, videos, and pictures you can store.  Well, this premise applies to servers as well, except that it is now even more important to make all the stored information easily and quickly accessible.

There are a couple of different ways to go about data storage at the server level.  You can combine it with your application server if you are running a small environment, or it can be its own standalone server.  It can be a SQL database server or a regular file-structure-based system.  Some examples of storage systems include HP's blade storage systems and Apple's Xsan.



Over the years, the idea of storage at the server level has grown.  It initially didn't exist; you had to save back to your computer locally.  But just as with the other services we've looked at, the demands of collaborative and Internet-based business called for storage availability at the server level, and in came the NAS (network-attached storage) solution.  There have since been many advancements in NAS, with RAID algorithms, built-in redundancies, and the SAN, the storage area network.
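To see what those RAID redundancies actually do, here is a toy Python sketch of RAID-5-style parity: the parity block is the XOR of the data blocks, so any single lost block can be rebuilt from the survivors.  (Short byte strings stand in for whole disk blocks here; real arrays stripe data and rotate parity across many disks.)

```python
def xor_parity(blocks):
    """Compute a RAID-5-style parity block as the XOR of all data blocks."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

def rebuild_lost_block(surviving_blocks, parity):
    """XOR the parity with the surviving blocks to recover the missing one."""
    return xor_parity(list(surviving_blocks) + [parity])

disks = [b"AAAA", b"BBBB", b"CCCC"]            # data blocks on three disks
parity = xor_parity(disks)                     # stored on a fourth disk
recovered = rebuild_lost_block(disks[1:], parity)  # pretend disk 0 failed
```

Because XOR is its own inverse, the same routine that builds the parity also rebuilds the lost block.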

Windows Storage Server Releases:



Additional Resources:
http://en.wikipedia.org/wiki/Computer_data_storage
http://welcome.hp.com/country/us/en/prodserv/storage.html
http://www.open-e.com/service-and-support/products-archive/products/open-e-dss/
http://blogs.technet.com/b/storageserver/archive/2008/06/09/a-brief-history-of-windows-storage-server-releases.aspx
http://en.wikipedia.org/wiki/Storage_area_network

Episode 4: The Application Server

The next part of the transaction I'll be discussing is the Application Server component.  An application server is simply a server that provides an environment for various applications to run.  For example, a Microsoft Exchange server is an application server that runs the Microsoft Exchange application on a Windows Server platform.



The options for an application server are many.  There are a couple of main server platforms, as we discussed earlier: Microsoft Windows Server, Solaris, and Linux-based systems such as Red Hat.  There are many more types, as well as proprietary platforms developed for specific applications.  Then, on top of any of these platforms, you can run your server applications, such as Microsoft SQL Server.



Over the past years, the definition of an application server has been developed and its place in the schema defined.  The number of applications that can now be provided at the server level has drastically increased as well.  Application servers were initially just static web servers, but with the increase in online business and the business platforms that needed to be provided online, they have developed into the massive and varied platforms they are today.

Additional Resources:
http://en.wikipedia.org/wiki/Application_server
http://www.answers.com/topic/application-server
http://www.theserverside.com/news/1363671/What-is-an-App-Server
http://technet.microsoft.com/en-us/library/cc750283.aspx#XSLTsection130121120120
http://www.javaworld.com/javaworld/javaqa/2002-08/01-qa-0823-appvswebserver.html
http://searchsqlserver.techtarget.com/definition/application-server
http://java.sys-con.com/node/36451

Episode 3: The Web Server


The next stop on our adventure is the Web Server.  The web server is key to storing and retrieving data on the Internet.  Web servers make web pages possible, and without web pages most people would not have a use for the Internet.  A web server is made up of two components: the hardware that it runs on and the web server software itself, which turns a regular server into a web server.

A client uses a web server every time they access a website.  For instance, you are reading my blog right now, which is hosted on a web server owned and operated by Blogspot.  You more than likely got here either through a Google search (also a website) or via a direct link.  The direct link should clue you in to a few things.  First, you are accessing my server space here: http://rmoorehead-mist7500.blogspot.com/.  If there is anything after the slash, you are accessing an individual file on the server.  If there is nothing after the slash, you are still accessing a file; you just don't know it.  You are accessing the index file.

So in essence, navigating a web server is very much like navigating your desktop's file structure of folders and individual files.  The magic comes when a web server receives requests, processes them, and sends the responses back out via the Internet.
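That folder-and-file analogy can be sketched directly.  Below is a simplified Python version of how a server might map a requested URL path to a file under its document root, including the hidden index-file default mentioned above.  The paths are made up, and real servers add security checks, MIME types, and much more:

```python
import posixpath

def resolve_request_path(url_path, document_root="/var/www", index="index.html"):
    """Map a requested URL path to a file path under the server's document root."""
    # Normalize the path to collapse things like "/a/../b"
    clean = posixpath.normpath(url_path)
    # A trailing slash (or the bare "/") means "serve the index file"
    if url_path.endswith("/"):
        clean = posixpath.join(clean, index)
    return document_root + clean

# "/" quietly becomes the index file, just like the blog URL example
print(resolve_request_path("/"))               # /var/www/index.html
print(resolve_request_path("/posts/42.html"))  # /var/www/posts/42.html
```

The point is simply that every URL, even the bare slash, bottoms out at a file the server can read and send back.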





The hardware requirements for a web server are pretty basic: large storage space, a fast processor, and a permanent IP address.  If you'd like to read more about the differences between a regular desktop and a web server, please review this article.  The software is where the fun comes in.  There is huge competition in the web server arena between Microsoft's IIS platform and the Unix-based Apache offering, with Sun bringing up the rear with some specialized optimizations.

Web servers, just like users' browser clients, have developed over the years.  The concept of the web server and a central repository of information was developed by Tim Berners-Lee in 1989.  And due to its popularity, Berners-Lee founded the W3C, the World Wide Web Consortium, in 1994 to regulate the web he helped to build.  Since then, the concept of a web server has grown.  We now have virtual web servers and web servers based in a cloud environment, in essence taking the hardware component out of the picture.



More Resources:
http://en.wikipedia.org/wiki/Web_server
http://www.webdevelopersnotes.com/basics/what_is_web_server.php
http://computer.howstuffworks.com/web-server.htm

Episode 2: The Internet

  

Onward!  We shall now traverse the Internet!  According to the Oxford English Dictionary, the Internet is "a computer network consisting of or connecting a number of smaller networks, such as two or more local area networks connected by a shared communications protocol." [1]  But I think we all could have guessed at that by now.  The Internet has been compared to the Universe model many times, as it's a network of stars.  The Internet is made up of two basic technology components: protocols and structure.

The Internet is defined by its protocols, as you saw in the Oxford English Dictionary's definition.  It's based around hardware- and software-level protocols.  The hardware that builds the Internet (and its networks) includes routers and switches.  You can read more about routers in one of my earlier blog posts.  The main software-level protocol is IP (the Internet Protocol), which is defined and maintained by the Internet Engineering Task Force (IETF).  More about the protocol suite can be found here, but basically, it's made up of several layers of protocols and standards.  The structure of the Internet is based around the "scale-free" network, which is mathematically explained here, but basically, the frequency of use of each node in the network determines how many connections that node has to other nodes.  The structure of the Internet is still being debated, but the scale-free model appears to be the best fit.
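The scale-free idea can be simulated with a preferential-attachment sketch: each new node links to an existing node chosen with probability proportional to that node's current degree, so well-connected nodes keep attracting connections.  Here's a rough Python illustration of that intuition (not the formal Barabási–Albert model, just the flavor of it):

```python
import random

def preferential_attachment(n_nodes, seed=42):
    """Grow a network where new nodes attach to high-degree nodes more often."""
    random.seed(seed)
    edges = [(0, 1)]       # start with two connected nodes
    degree_pool = [0, 1]   # each node appears once per connection it has
    for new_node in range(2, n_nodes):
        target = random.choice(degree_pool)  # degree-proportional choice
        edges.append((new_node, target))
        degree_pool.extend([new_node, target])
    return edges

edges = preferential_attachment(100)

# Count connections per node: a few "hub" nodes collect many links
degrees = {}
for a, b in edges:
    degrees[a] = degrees.get(a, 0) + 1
    degrees[b] = degrees.get(b, 0) + 1
```

Plot the degree counts and you get the familiar long tail: lots of one-link nodes and a handful of heavily connected hubs, which is what "scale-free" describes.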



The Internet as we know it began development in the late 1960s and early 1970s with ARPANET and the Mark I network.  You can find a timeline of events here if you are curious.  Individual groups in education and government worked to develop local area networks that they then interconnected.  With the birth of email in the late '70s, the idea of a public network was introduced.  Then the IP protocol was defined in the early 1980s to standardize all communications.  Also in the early 1980s, the personal desktop became a reality, introducing more nodes to the network.  And it has grown in leaps and bounds since then into what we have today.



More Resources:
http://computer.howstuffworks.com/internet/basics/internet-infrastructure.htm
http://en.wikipedia.org/wiki/Internet#Technology

Episode 1: Client Technologies




     As I mentioned in the crawl, I'm going to be starting a six-episode discussion reviewing some of the basics of Internet technology.  I'll begin discussing these topics from the client end of the spectrum through to the data end.  Client-side web technology can be a very broad topic, but basically, it's made up of three components: the client's platform, the client's software, and what technology is being utilized locally.



The client's platform can be a desktop, laptop, tablet, or mobile device (with basic web capabilities or a smart phone), running Windows, Mac, Unix, Android, or Symbian.  Here are some neat statistics from the website w3schools.com on the current platform market share for their site.  Basically, Windows has held a dominant market share for a while, but it's starting to lose ground to mobile devices.  The client's software must also be taken into consideration.  There are multiple Internet browsers available, such as Google Chrome and Mozilla Firefox.  A client can also use an installed application to access resources on the web, such as MySQL Workbench or the Evernote mobile app.  Below is a nice graph indicating where we are now with browser distribution.




The technology that is being utilized locally is always changing.  Some of the staples include HTML/Cascading Style Sheets (CSS), JavaScript, AJAX, and XML.  These all come together to define what's called the Web 2.0 phenomenon.  You can read more about that here on HowStuffWorks, but basically, different platforms and software can render these different languages into usable information for a user in a graphical user interface, most of the time.  Some people still enjoy their black screens with white text.


http://en.wikipedia.org/wiki/File:Timeline_of_web_browsers.svg

So how has this all developed over the past few years?  Well, one word could sum it up: drastically!  Most major companies, such as Microsoft and Apple, put out a new line of products (platforms and software) at least once a year, so client technology development has had to change constantly to keep up.  You can read HowStuffWorks' articles on Web 1.0 and where they think we are going with Web 3.0.  You can also read about the evolution of HTML and where it's going with HTML5.



Additional Resources:
http://en.wikipedia.org/wiki/Client-side
http://en.wikipedia.org/wiki/Web_2.0
http://en.wikipedia.org/wiki/History_of_the_web_browser




Monday, November 7, 2011

Developing in an Ever Mobile World


This week in class we had the opportunity to listen to Chuck Hudson from Control4 speak about mobile application design.  He opened his talk with a discussion of the "device market" and its recent revolution away from the purely mobile way of thinking.  It's not about the fact that a device is mobile; it's about the devices that clients are trying to connect with, i.e. smart phones, tablets, TVs, car displays.  He spoke a bit about the major market players and the market trends; Android and iOS are leading the pack in the U.S.  He then detailed the various markets that individual apps can fall into:

  • pure application sales
  • outside mobile license
  • advertising supported
  • referral/affiliate fee
  • complementary model
  • in-app purchases



After that we dug right into the meat of developing an application, starting with the design and the challenges faced by mobile development.  He shared with us the statistic that "1 in 4 apps downloaded are never used."  And that's huge.  There are many considerations for determining whether an application is going to succeed, such as the orientation, the envisioned usage, the platform, and screen size restrictions.  He also could not have more highly emphasized the need to storyboard and prototype, as well as to test, test, test.  One prototyping program he recommended was Balsamiq (www.balsamiq.com).  He also highly recommended getting involved in user groups such as those located at developer.android.com and developer.apple.com/ios.

After that, we broke it down to the platform level.  For Android, we discussed screen resolution support (4 generalized sizes; 4 generalized densities), graphics and layout complexity, device testing and compatibility, and device fragmentation.  When developing for Android, the environment is open source (Eclipse and the Android SDK).  The key differences include a community-fed library and diverse widget availability.  The example that we went over is located here.  When you are ready to distribute your app, you can upload it to a couple of different sites, but mainly the Android Marketplace or Amazon.  The process includes code signing, incorporating any marketing information, and submitting to each site.  You should also think about any legal steps you'd like to take before making the application public.


We then spoke about iOS development.  iOS development uses an MVC design pattern and is coded in Apple-developed Objective-C.  The development tools, which include Xcode, an interface builder, and Instruments, are more mature than Android's since iOS has been on the market the longest.  Apple also has the iOS Ad Hoc program to help get your product into beta testing.  To release your app to the public, you need to add your app, upload it via the Application Loader, go through Apple's certification process, then release it when you are ready.  Some common issues experienced during this process include improper UI usage, over-throttling web request pulls, and memory leaks.

After that we talked about the considerations that need answering before a deployment.

  • Which OS versions do you support?  A good general rule is the current version plus one.
  • How will licensing be handled?  A tip for handling licensing is to balance the level of licensing with the effort needed.
  • Will you implement localization?  If so, how?  And how will you maintain it in the long run?  In-house, contractor, or community group?
  • Beta Test, Beta Test, Beta Test.


Then we talked about how to measure your application's success.  It's unique to each application and its developer, but you could gauge it by the number of installs or the amount of traffic, utilize any statistics available through the market, or even create your own measurement using built-in logging.  You should also have a process ready for supporting your application: always welcome feedback, create new updates, and build in code support mechanisms.  A few tips and tricks that he left us with: know that development will always be a continual investment; market yourself, get out there and get noticed; test, test, test; and launch!  Don't hold back.  Always look back on your 1.0 version and be embarrassed.


Services! Get Your Services Here!

So what is a Web Service, you say?  According to the W3C, the World Wide Web Consortium that develops web standards, "a web service is a software system designed to support interoperable machine-to-machine interaction over a network. It has an interface described in a machine-processable format (specifically WSDL [the Web Services Description Language]). Other systems interact with the Web service in a manner prescribed by its description using SOAP messages, typically conveyed using HTTP with an XML serialization in conjunction with other Web-related standards."  If that just sounded like a bunch of techno-babble, read on!  And hopefully, I'll explain it in plain English for you.






A web service is a way for one server to speak to another server without any human interaction.  Many online functions can be performed with web services, such as online web payments, fraud detection, and package tracking.  There are two primary categories of web services: the "Big Web" service and the "RESTful" service.
The "Big Web" service utilizes XML (extensible markup language), specifically the WSDL and the SOAP standard, to deliver interoperability for remote procedure calls (RPCs) and messaging integration between systems.  The WDSL, the web service descriptive language, is a way to describe the end points of a service call no matter of the format of the message.  The Simple Object Access Protocal (SOAP) is made up of three part: envelope that defines the architecture, a set of encoding rules, and a definition for representing RPCs.  SOAP can be used with many other protocols but primarily with HTTP and HTTP extensible framework.  A "Big Web" service that utilizes SOAP must have these three components:  a formal contract that defines how the web service will interact, must be able to handle complex nonfunctional requirements, and must be able to run asynchronously.  Here are a few examples of  "Big Web" services in action using AJAX: Big Web Examples (IBM)


The "RESTful" service, Representational State Transfer service, limits its architecture to a smaller, standard group of operations such as GET, POST, and DELETE for HTTP making it simple, lightweight, and fast.  This form of web service primarily focuses on stateful resources and the usage of clean URLs.  "RESTful" services can be created in three ways:  using WSDL and SOAP, as an abstraction on top of SOAP, or without SOAP at all. The design of a "RESTful" service is based around resource identification through URIs, having a uniform interface, using self-descriptive messages, and utilizing stateful interactions through hyperlinks.  Here is a similar example to the "Hello World" "Big Web" service example provided above, but this time developed using the "RESTful" implementation:  RESTful Example (Oracle)


So which one do you use?  When comparing the two types, there are four criteria to review.  The first is how you want to use the web: "Big Web" services simply use the web as a transport medium, whereas "RESTful" services use it for publishing information.  The next is how much heterogeneity is needed.  "RESTful" systems rely solely on HTTP data types, whereas "Big Web" is more robust and allows for more customization to connect to legacy systems written in different languages, such as COBOL.  Another consideration is how loosely coupled the interacting systems should be, i.e. the client-to-server relationship.  "RESTful" services rely heavily on HTTP connections, and if the web server is down, then no transaction can be performed.  This can be overcome, however, with dynamic late binding, where the server is located at the time of the transaction.  "Big Web" services have neither the time/availability issue nor the location issue.  "RESTful" web services are better suited for basic, ad hoc integration, such as most things over the web, and are relatively inexpensive to develop since they use a lightweight architecture.  "Big Web" services are better suited for when quality of service really matters, i.e. enterprise application integration.  "RESTful" web services are also better suited for web applications, since most clients can consume them, letting the server side evolve and scale.

Well that was a quick and dirty overview of the two main types of web services available.  Please check out the links throughout for some additional reading and my sources.

Monday, October 24, 2011

Green Space: An Eco-Friendly Landscape Company

Green Space is a company that I created for our class midterm. It's a landscape firm that is centralized around the concept of maintaining a "green", environmentally conscious outdoor living space. I developed a website for this fictional company, and it's located here.

Amazon: Putting Your Head In The Clouds


This week in class we are watching this presentation by Dr. Werner Vogels, the Chief Technology Officer and Vice President of Amazon.



The Amazon Cloud was initially developed for Amazon's engineers.  Amazon's product teams work around the concept of "You Build It, You Run It."  After a while, Amazon noticed that the productivity of its teams was going down.  Amazon did a deep analysis of its product teams and found that a lot of them were spending large amounts of time overseeing the management of their infrastructure, i.e. managing servers, data stores, etc.  That is when the idea of a cloud service was ignited.  Amazon needed to improve its IT infrastructure to better its business processes.

There were many considerations that Amazon reviewed when designing its cloud service.  First and foremost, they wanted to design it for flexibility.  They did not want to limit the innovation of their engineers, so it needed to be extremely versatile.  It also needed to be an on-demand service.  Amazon initially had a turnaround time of a few hours for provisioning new servers, but this was not going to be good enough, so they developed a completely automated and scalable process for on-demand provisioning and de-provisioning.

Amazon foresaw this new service becoming the next "utility" offering, so they developed a utility-based pricing structure that benefits both them and their clients.  They also needed to improve the transparency of cloud computing to allow for latency management and compliance with local regulations in order to grow their client base.  Amazon knew from their retail business that they could not make this service a success alone.  They solicited support through an "associates" model and built APIs for companies to utilize and mutually benefit from.  These benefits and additional growth also helped Amazon create additional economies of scale, allowing them to continually drive down pricing.  They also needed to look at which services would be universal for all clients, including low pricing, reliability, guaranteed performance, and security.  After all of these considerations were implemented, the Amazon Cloud Service was born.

To read more about the Amazon Cloud Service and its use in IT, check out these recent news articles:

Research and Markets: Cloud: Successful Strategies for Providing Services
Case study: Energy firm scales IT through Amazon Web Services cloud
Clouds vs Outsourcing



Thursday, September 29, 2011

What is a CMS?

By: Blake Haas, Andrew Kuehl, Rachel Moorehead, Berecia Stevens

Definition:
A system providing oversight to manage work flows in a collaborative environment.

Advantages:
  • Control - Content Ownership  and Accessibility 
  • Decrease Costs - Content Creation, Management, and Publishing 
  • Increase Revenues - Time Sensitive Opportunities and Fresh Content 
  • Improve Accountability - Audit Trail and Version Control
  • Maintain Consistency - Presentation Consistency and Brand Integrity 

Disadvantages: 

  • Contains hundreds of files
  • Limited Flexibility in Design 
  • Limited SEO of Web Pages 
  • SEO Maintenance 
  • Slow Loading 
  • Expensive Design 
  • Maintenance Costs

Key Features:
  • A Centralized Repository
  • Workflow Automation
  • Rapid Content Import
  • Dynamic Tracking and Alerts
  • Version Control
  • Automatic Distribution
  • Security

Choosing a CMS:
  • Scope
  • Type of Projects
  • Buy or Rent
  • Quick and easy installation
  • Simple administration interface
  • Quick and easy extension of CMS for extra functionality
  • Simple template manipulation
  • Helpful user community


Examples:

Friday, September 23, 2011

APIs Gaining Ground with Businesses

Most people in the Internet technology field by now have at least heard the acronym API.  An API (Application Programming Interface) is used to transfer data from product to product or service to service.  It allows developers to integrate your product's offerings into another product or service, creating a continuous network of connections.  APIs are new cloud-based libraries, in a sense, and can help bring together your business's connections to the cloud, mobile devices, and social networks.  Sam Ramji from Apigee, a company that "develops API tools for developers who use APIs", gave a presentation on this topic that can be found here.  In his presentation, he talks about how APIs help businesses stay ahead of fast-changing markets and the continuously fractured interest groups they are trying to reach.



The ideas behind APIs and API development are openness and distributing data as quickly and efficiently as possible.  This comes with risks and control issues.  Businesses need to send out their typically legacy information for use by outside sources that might not be under their control.  Businesses can, though, control abuse by setting rate limits for information transfer or by setting access restrictions.  Innotas, a cloud-based PMO, developed their API offerings with Apigee.  Innotas was looking for a solution to give their customers consistent service delivery and to gather information on inbound and outbound traffic.  They approached the risk of losing control by implementing separate analytics and traffic controls to manage data transfer.
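Rate limiting like that is usually implemented with something like a token bucket: each client accumulates tokens at a steady rate, each API call spends one, and when the bucket is empty the call is rejected.  A small Python sketch (the rates and capacities are arbitrary, not anyone's real limits):

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter for API calls."""

    def __init__(self, rate_per_sec, capacity):
        self.rate = rate_per_sec       # tokens added per second
        self.capacity = capacity       # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens for the time elapsed, capped at capacity
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate_per_sec=5, capacity=10)
# A burst of 10 calls passes; the 11th, fired immediately after, is throttled
results = [bucket.allow() for _ in range(11)]
```

The nice property is that it permits short bursts up to the bucket's capacity while still enforcing the average rate over time, which is exactly the kind of knob an API provider wants.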



The benefits of APIs, though, far outweigh the risks.  In the same sense that businesses can push information out, they can also draw information in.  This new stream of information can be used to adapt and innovate a business's current offerings and its APIs, and to develop the business's model around its API offerings.  Innotas, in their implementation, provided operational and business-level visibility into their APIs.  This gave them access to customer usage reports, helped provide the quality they were striving for, and kept them competitive with their on-premise counterparts.  In conclusion, I leave you with the words of Chris Anderson from Wired magazine: "The Web is Dead.  Long Live the Internet."

For more information:
Another presentation by Sam Ramji
http://www.slideshare.net/samramji/punctuated-equilibrium-celestial-navigation-and-ap-is
The API Evangelist
http://www.apievangelist.com/

Tuesday, September 6, 2011

Welcome to the Future

This week in class we are talking about the Future of the Internet.  In preparation for class, we are reading the UK Future Internet Strategy Group's "Future Internet Report - May 2011".  In this report, the group defines the future of the Internet, the opportunities and challenges faced in moving toward that future, and what the UK can do to prepare for it.  The group defines the "Future of the Internet" in terms of three emerging components: combined service offerings, shared data amongst services, and a revamped network infrastructure.



Combined Service Offerings:

As the Internet grows and expands, so grows the number and type of devices connected to it.  Sensors and mobile connected devices are becoming a more integral part of the Internet and the way it is utilized.  With so many connected devices, a new type of data is being collected: sensor-collected data.  In conjunction with human-entered data, sensor and device data can be used to provide a better picture of the services utilized and to customize service offerings at lower cost, at any time.  Also, with this increase in data comes the breaking down of barriers between systems, making the data readily accessible to the person who needs it rather than stored in an isolated system.

Shared Data Management:

More companies than ever are collecting data about your web browsing and purchasing history.  They turn around and use that data to customize your experience with their services or sell it to other companies to do the same.  As more and more services combine, lots of data is being passed back and forth.  This can be good, and this can be bad.  One of the main concerns with storing all your data in a readily available packet of information is the security used to move the packet from place to place and the controls to determine who gets which piece of data.  How this process develops will be a big driving factor in which services can go to a cloud environment and which ones need to stay on-premise.  Another aspect of the shared data component is centralizing access to ease service integration into an already developed access control system.



Revamping the Network:

With the scenarios for communication (machine-to-machine, machine-to-man, and man-to-machine) increasing, the network that these communications move across needs to be growing and scalable too.  More and more of the data provided to end users comes as large on-demand, real-time products, and the network needs to be able to handle that much information as quickly as possible.  A large wireless network also needs to be created and implemented to maintain an always-connected environment, but this comes with challenges such as handling wireless spectrum interference and the need for broadly accepted IPv6 implementation.

I feel that this report accurately represents where the Internet is going: a centralized, cloud-based service provider.  I am excited about the progress being made, but I'm also hesitant about being "there" now, as there is not a lot of legislation developed yet to weigh the consumer's wants and needs against the wants of the service provider.  This report, while mentioning a few big names in its case studies, also did not look at the big stakeholders currently highly invested in the Internet, such as Microsoft, Google, and Apple, and how their choices as companies would affect the Internet "market" and how it grows.

The topics that we are covering in my MIST7500 class go hand-in-hand with this report.  So far we've reviewed cloud-based service offerings, and we've implemented many integrated services.  Coming up later in the semester, we will be looking at different networking technologies.  This class covers a good bit of the leading-edge technology currently being developed and utilized while showing you where we've been, too.

Where to next?


Monday, August 29, 2011

Exploration of Google Sites

Continuing our discovery of new SaaS offerings, I built a web presence for my fictional infomediary, KnitWeek.

Please go check it out:
https://sites.google.com/site/knitweek/



I found that creating the Google site was fairly easy.  I did get a little confused on 'themes' versus 'layouts' versus 'templates', and I couldn't find where to change my template again once I got started, but that was okay in my case: I chose a blank template to begin with so that I could start with a clean slate.  Google Sites definitely has a lot of expandability; you can make your site as simple or as complicated as needed for your purposes.  There was not a great "Getting Started" section, though, other than the initial walkthrough of the appropriate site-wide settings such as title, layout, etc.



The actual design aspect wasn't exactly user friendly to begin with, but once I got started, the learning curve was fairly short.  I really enjoyed the readily available widgets and tie-ins to the other Google products such as Calendar and Forms.  However, I was displeased with the inventory of widgets and the fact that most appeared to be user generated.  It was very hard to find the exact RSS feed reader I wanted using the search, and I had to try out many before I found one that actually worked as documented.  The navigation was set up well, though, for moving back and forth while trying out new widgets.  It was also disappointing that there was no "Create New" option when tying in with the other Google offerings.  It seems like a pretty easy option to implement that they chose not to.


Google Sites definitely fits in with our definition of the Software As A Service model.  It is on-demand access to webhosting, web design, and collaboration suite integration.  Also all the information posted and developed is stored centrally with Google in the cloud, making it accessible anywhere from the Internet.  Google's collaboration suite relates to the Service-Oriented Architecture concept in that it is composed of individual components that can be taken separately or in conjunction with each other in a multitude of combinations.  It also follows that each of these components is able to be accessed remotely as well.  Google's suite of resources is scalable, reusable, autonomous, and granular.

Software As A Service Comparison

This week in class we discussed cloud computing.  The first topic we are tackling in this discussion is SaaS (Software As A Service).  Here I am presenting my comparison of three SaaS competitors: Microsoft Office 365, Google Docs, and Zoho.


Microsoft Office 365 is a collaboration suite that allows you to go seamlessly from an online experience to an offline desktop experience.  It includes email, calendar, contacts, Office applications such as Word, Excel, PowerPoint, and OneNote, and Microsoft's Sharepoint and Lync products.  Microsoft has a great Getting Started website if you want to explore it some yourself.  I found the basic functionality such as composing an email, starting a new document, or setting up sharing on a calendar to be very easy to use and very intuitive.

I did not like, however, how much needed to be downloaded and installed before the integration with my computer was complete.  It required at least two separate downloads and a reboot.  Once installed, though, the Office application suite on my computer integrated well with the online experience.  Another thing I did not find particularly easy to use was the SharePoint component.  For those of you who are not familiar with SharePoint, it is a collaborative web space where you can publish websites, share documents, and set up forms.  However, this basic implementation of the product could use an overhaul of its flow from creating to designing to publishing.

This software suite definitely has its benefits for a business.  It includes all the major functionality any business would need, no matter the size.  It's scalable, and Microsoft has become a business standard.  There are numerous support mechanisms already available and documented, and Microsoft itself also offers more advanced support for a price.  Also, Microsoft has developed its own mobile synchronization technology, ActiveSync, which most mobile devices support, allowing the user to access email, calendar, and contacts from a mobile device as well.  All in all, Microsoft Office 365 is not a bad base-level product for a business.


Another option available is through Google.  Google offers many online applications for various tasks.  For the purpose of this comparison, we'll be looking specifically at their Google Docs offering.  It's very easy to get started with Google Docs.  There is not much of a "Getting Started" guide, but it's not really needed since the interface is very basic and simple, and there is contextual help available at all times.  You have six options for document types with Google Docs (Document, Presentation, Spreadsheet, Form, Drawing, and Collection [aka Folder]), which seems limited compared to the other offerings, but it's a strong set of tools in the toolbox.

There are a couple of drawbacks to using Google's suite, though.  First, there is a learning curve for Microsoft or open-source converts.  There is no 'Close' button; everything is auto-saved.  There are not tons of menus to navigate; every available option is shown, which also limits the options available.  Another drawback is that since Google is so massive, its offerings are segregated.  In short, you'll check your email and calendar in one place, post your website from another, and build and store your documents in a third.

All in all, Google has a nice competitive offering for businesses.  Google is very aware of its customer base, striving to bring more to businesses and to implement on-demand product changes.  Google also has wonderful customer support options available, and it has been around long enough now that stability and quality of service over time are no longer the concerns they once might have been.



Lastly, I'll be looking at the new kid on the block, Zoho, which came on the SaaS scene in 2007.  Zoho offers businesses a completely online platform of applications.  Its 25 applications range from the basic email and calendar offerings to the more advanced and specific MarketPlace, BugTracker, and CRM applications.  Zoho works well as an application portal, a one-stop shop.  It's easy to get signed up and started with Zoho, it integrates with Google and Microsoft SharePoint products, and it makes it extremely easy to use your own purchased domain name.

There are a few downsides to Zoho, however.  It does not have complete feature parity with its competition in its basic Writer, Meeting, and Show products.  Its tabbing functionality can also be confusing to a new user.  It puts strict limits on storage space unless you purchase more, where its competitors give their users a large free starting allotment.  The product's home page is also extremely cluttered, making it hard to figure out where to get started.

For business purposes, Zoho might be a little out of its league in terms of feature sets on its core functionality (creating, editing, publishing, email/calendar), but it tries to make up for it with breadth of applications.  I imagine that Zoho will soon grow and develop into a great business solution, especially since businesses are its primary customer base, unlike Google's; for now, though, I think it needs to mature a little more as a service.  Its mobile capabilities also need further development beyond a mobile version of the website.  A product set such as this really needs good mobile integration.

Tuesday, August 23, 2011

The Business Model Canvas: KnitWeek

For my second business model canvas, I am creating a made-up company called KnitWeek.  KnitWeek is an infomediary business that aggregates current deals and trends in the knitting industry and collects information from the knitting community that is then sold to crafting businesses.  KnitWeek also utilizes weekly email newsletters, social forums such as Ravelry, Facebook, and Twitter, and partners such as Michaels and Hobby Lobby to connect with the community and to harvest data.

The Business Model Canvas: Shear P'zazz

We've been learning about business model canvassing based on Business Model Generation by Alexander Osterwalder and Yves Pigneur.

For my first business model, I'm going to model the business Shear P'zazz.  Shear P'zazz is a hair salon that I worked at throughout high school.

Routers and The Technology that Make Them Work

Morning, Blogoverse!  For my first technical blog post, I'll be writing about routers.  The website HowStuffWorks has a wonderful group called TechStuff.  This blog post is structured around their podcast titled "What is a router".  They also have great blog posts about routers on their website, so please check those out for more information.



Internet traffic, just like automotive traffic, has rules.  These rules are called protocols.  The Open Systems Interconnection (OSI) protocol stack lays out these rules as layers.  I'll mainly be discussing what goes on at the 3rd layer of this 7-layer standard, most commonly known as the Network Layer.  For a more detailed description of the OSI layers, please see this PDF on the standard, specifically page 47 for the network layer.

Routers are very multi-functional network-layer devices.  They control all communication between you and the Internet, scanning all the traffic going through them and applying certain rules and regulations to that traffic, sometimes even denying or dropping traffic that tries to get through.  Routers sit everywhere a user connects to the Internet, creating a global network of routers.

The traffic that flows through a router comes and goes in what is called a packet.  The packet is also part of the standards dictated by the OSI model.  Packets include directions on their destination and how they fit with other packets to make a whole data file.  Routers then use this information to determine the fastest route a packet should take to its destination, which might not be the shortest route.  To do this, routers track the number of routers a packet goes through during a data exchange, called hops.  Certain protocols and packets limit the number of hops a packet can take, and the router must take this into account when determining its route.
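To make the "fastest versus shortest" idea concrete, here is a toy Python sketch of picking the lowest-latency path through a small network of routers while respecting a hop limit.  The graph, latencies, and router names are all made up for illustration; real routers work from distributed routing tables, not a global map like this.

```python
import heapq

def fastest_route(graph, src, dst, max_hops):
    """Find the lowest-latency path from src to dst that stays within
    max_hops.  graph maps each router to {neighbor: latency_ms}.
    Toy illustration only -- not how real routers are implemented."""
    # Priority queue of (total_latency, hops, node, path_so_far)
    queue = [(0, 0, src, [src])]
    best = {}  # cheapest latency seen for each (node, hops) state
    while queue:
        latency, hops, node, path = heapq.heappop(queue)
        if node == dst:
            return latency, path
        if hops == max_hops:
            continue  # hop limit reached; can't extend this path
        for neighbor, cost in graph[node].items():
            state = (neighbor, hops + 1)
            if best.get(state, float("inf")) <= latency + cost:
                continue
            best[state] = latency + cost
            heapq.heappush(
                queue, (latency + cost, hops + 1, neighbor, path + [neighbor])
            )
    return None

# The fastest path (A->B->C->D, 30 ms) has more hops than the
# shortest one (the direct A->D link at 50 ms).
graph = {
    "A": {"B": 10, "D": 50},
    "B": {"C": 10},
    "C": {"D": 10},
    "D": {},
}
```

With a generous hop limit the router prefers the 3-hop, 30 ms path; cap the hops at 1 and it falls back to the direct 50 ms link, mirroring how a hop limit can force a slower route.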



Routers are in constant communication with each other using a different protocol called the Routing Information Protocol (to read more, please see the RFC posted here).  They do this to monitor and notify each other when certain routers are overloaded or when there is a faster path a different way, which assists them in balancing the load across the whole Internet.
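The heart of a RIP exchange is a simple rule: if a neighbor advertises a cheaper way to reach some destination, adopt it.  Below is a heavily simplified Python sketch of one such update; the table format and function name are my own inventions, and real RIP (RFC 2453) adds timeouts, split horizon, triggered updates, and more.

```python
def rip_update(my_table, neighbor, neighbor_table, link_cost=1, infinity=16):
    """One round of a RIP-style distance-vector update.  Tables map
    destination -> (hop_count, next_hop).  When a neighbor advertises
    its table, adopt any route that is cheaper by way of that neighbor.
    Simplified sketch, not a full RIP implementation."""
    for dest, (hops, _) in neighbor_table.items():
        # RIP treats 16 hops as "unreachable", so cap the metric there.
        new_hops = min(hops + link_cost, infinity)
        current_hops, _ = my_table.get(dest, (infinity, None))
        if new_hops < current_hops:
            my_table[dest] = (new_hops, neighbor)
    return my_table
```

For example, if router A only knows about itself and neighbor B advertises routes to B (0 hops) and C (1 hop), A learns it can reach B in 1 hop and C in 2, both via B.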

Routers have a hardware and a software component and can be seen as a very specialized computer on the network.  Routers can also act as a switch.  To read more about switches, please see this post by HowStuffWorks called "How LAN Switches Work".  Switching allows multiple computers to talk to each other as well as the Internet.  Each device connected to the Internet must have an identification code called an IP (Internet Protocol) address that a router can send data packets to.  This must be a unique address, or else traffic might be sent to the wrong location.  If every networked device had its own public IP address, though, we would run out very quickly with the advent of mobile devices and networked sensors such as security cameras.

To get around this, a router can do a few things:
  • First, it can limit who can connect to the Internet through it by restricting connections by a device's MAC (Media Access Control) address, which appears in the OSI model as well and is unique to each piece of hardware.
  • Secondly, it can act as a postmaster by assigning non-unique, dynamic IP addresses to its connected devices and then routing incoming and outgoing data packets using a smaller number of static, unique IP addresses when communicating with the Internet.  This process is called Network Address Translation.  It has its pros and cons and can slow down transaction times if done improperly.  To read more about NAT, please see this great article.
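The "postmaster" idea boils down to a translation table: many private (address, port) pairs share one public IP by being remapped to distinct public ports.  Here is a tiny Python sketch of that table; the class name, port range, and addresses are invented for illustration, and real NATs also track protocol state, timeouts, and much more.

```python
class SimpleNAT:
    """Toy sketch of port-based Network Address Translation: private
    (ip, port) pairs are mapped to unique ports on one public IP."""

    def __init__(self, public_ip):
        self.public_ip = public_ip
        self.next_port = 40000          # arbitrary starting public port
        self.out = {}   # (private_ip, private_port) -> public_port
        self.back = {}  # public_port -> (private_ip, private_port)

    def translate_outgoing(self, private_ip, private_port):
        """Rewrite an outgoing packet's source to the public address."""
        key = (private_ip, private_port)
        if key not in self.out:
            self.out[key] = self.next_port
            self.back[self.next_port] = key
            self.next_port += 1
        return self.public_ip, self.out[key]

    def translate_incoming(self, public_port):
        """Look up which private device a reply should be delivered to."""
        return self.back.get(public_port)
```

Two devices on 192.168.1.x can both talk to the Internet through the single public address, and replies arriving on each public port are routed back to the right private device.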
There are some additional concerns to consider when looking at routers.

  • Whether you are using a wired connection or a wireless connection can make a big difference in your experience.  Wired connections tend to be more reliable than wireless.  With wireless connections, you also run into a higher probability of interference from other devices as well as router/access-point compatibility issues.  Wireless traffic uses the 802.11 protocol, which has many variants such as 802.11a, b, g, and n, and if your access point does not use the same 802.11 variant as your device, it might not be able to connect.  You can read more about the 802.11 standard on the IEEE website.
  • Data packet delays might not be an issue with your router but may be an issue with your Internet Service Provider (ISP).  ISPs can put data caps on your usage based on their service level agreement with you to maintain a stable network for the rest of their customers.
  • People can try to hack into and attack routers just like they do computers.  One of the most common examples is the Denial of Service (DoS) attack, where someone floods your router with so much traffic, legitimate or junk, that it can't keep up, so everyone connected behind your router can no longer communicate with the Internet.  Another flavor of this attack is the DDoS, a distributed denial of service attack, where the flood is performed by botnets so the sources cannot easily be distinguished.  Here is a great paper describing botnets and their usages.
  • Firewalls can also be applied on a router to assist in intrusion detection and handling and can assist in encrypting your data packets for additional security.
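To illustrate the firewall point above, here is a minimal Python sketch of first-match packet filtering with a default-deny policy.  The rule format, addresses, and function name are all hypothetical; real firewalls are stateful and match on far richer criteria.

```python
def firewall_filter(packet, rules):
    """Minimal first-match packet filter sketch.  Each rule is
    (src_prefix, dst_port, action); dst_port of None matches any port.
    Falls through to a default-deny policy."""
    for src_prefix, dst_port, action in rules:
        if packet["src"].startswith(src_prefix) and (
            dst_port is None or packet["dst_port"] == dst_port
        ):
            return action
    return "deny"  # nothing matched: default deny

# Example ruleset: trust the internal subnet entirely,
# and allow web traffic (port 80) from anywhere.
rules = [
    ("10.0.0.", None, "allow"),
    ("", 80, "allow"),
]
```

With these rules, an internal host reaching out on any port is allowed, an outside host hitting port 80 is allowed, and everything else (say, an outside host probing port 22) is denied by default.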


I hope this blog post has been informative.  If you find anything incorrect or would like clarification on anything, please post a comment, and I'll get right on it.  Thanks!




Friday, August 19, 2011

Welcome!

Welcome to my blog!  I am a graduate student in the Master of Internet Technology program out of the Terry College of Business at the University of Georgia.  This blog is for me to communicate what is going on in my MIST 7500 class, Introduction to Internet Technology, and to share any fun things I find along the way that I find relevant.  We are going to be talking about a wide spread of things in class, including the Internet (surprise!), networking, websites and HTML, and cloud computing.  Also, since this is a management program, all of this will come with a business-modeling, forward-thinking perspective.

To give you a little background, I received my undergraduate degree in Computer Engineering in 2007, but ever since then I've worked in Internet Technology (IT) and never really branched out into the wide world of computer engineering.  Since I've been in IT, I've done customer support, networking, web design, application design, and much more.  I am currently working at UGA in the Enterprise Information Technology Services department doing a wide range of things.  I also like to dabble in technology, so I know a little about a lot of topics.  I am really looking forward to digging in, seeing what else is out there, and honing my Internet Technology skills.

I hope you enjoy my blog!  Please feel free to post your comments!