We interrupt normal broadcasting for a thought bubble!

Good Morning.

 

I got into my car this morning as usual, and at some point I pulled up Spotify to play one of my own playlists.  This morning I was a victim of their marketing initiatives: instead of my recent heavy-rotation playlists being at the top, there was a link to the podcast StartUp.  I read the blurb and it seemed interesting, since I did have a stint as an entrepreneur, starting up a small retail/wholesale business when I finished university.  So I started the podcast, and interestingly it was on the live journey of Gimlet.  This episode dealt with the run-up to Spotify acquiring them.  One area of discussion was the cultural and managerial differences in running the business.  Matt was focused on strategy and running the company by logic and numbers, whilst Alex was running the creative and content side.  One comment stuck out: the word “feeling” kept coming up.  If, in a business review meeting, something was discussed that touched on the creative side, then “that doesn’t feel right” or that “feels like a good idea” would surface, whereas the numbers (sponsorship, subscriptions, revenue and growth) would run counter to what felt good.

Like all good marriages there will be this counterbalance.  But it got me thinking that AI projects fundamentally need to align to the strategy of an organization (cost/value, industry/market differentiation, etc.).  However, if culture change is not incorporated into the project then we will hit a larger wall.  Organizational change management is always given lip service, but when you deal with people’s “feelings” in business there has to be a means to accommodate or incorporate that into the plan for AI to succeed.

I always reflect back to the fact that tip jars or charity jars in shops do better if you put ‘googly eyes’ on the jar.  Putting a human face, or at least eyes, on something seems to help interaction and success.  Robotics is the third arm of AI, and we are seeing that if a human-interaction element deals with perception, interaction and emotion it is more successful as well.

Bottom line of this thought bubble: plan for and incorporate the broad array of perceptions and emotions into the AI solution design and business case.

 

Now back to normal programming.

AI Use Case 1: Step 2 Future State of HR Hiring

 

So, to summarize from the last post: the first primary step in designing for AI projects is to align to the business strategy.  As discussed earlier, a specific use case is needed, as well as an understanding of current-state processes, capabilities and outcomes.  Now that we understand the state of the organization and the intended high-level outcome, we get to aligning the AI capabilities that could best address the desired outcome.

A US national law firm leveraging predictive ML models could apply them to these processes:

The hiring process.  Improving candidate identification would reduce hiring costs and time.  HR knows all the variables of good candidates but cannot evaluate them by hand today in the time needed.  Having an ML model improve candidate identification would help the HR team.

Predicting attrition factors for a hired partner would be another process.  Early intervention would reduce the risk and cost associated with losing a law partner.  The ML model could analyze more data points for each partner, faster than HR could.

Skills management would be another process to automate.  Identifying skill gaps by combining performance reviews and other sources of data would benefit each lawyer’s assessment, since more data points would provide better insight into where a lawyer needs training.  This would support the market strategy of industry practices and expertise.

Improved HR performance reviews would be another area that would benefit from ML models.  Analyzing more factors of a lawyer’s activity, versus semi-annual reviews, would give better insight.  Documentation time, court time and legal preparation time, as well as HR data, provide insight into a lawyer’s performance.
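To make the prediction idea concrete, here is a minimal sketch of an attrition model.  The data is entirely synthetic and the two features (billable-hours trend, engagement score) are hypothetical; a real model would draw on the firm’s actual HR data and many more variables.

```python
import math

# Hypothetical features per partner: [billable_hours_trend, engagement_score].
# Synthetic training data: 1 = left the firm, 0 = stayed.
data = [([-0.8, 0.2], 1), ([-0.6, 0.3], 1), ([0.7, 0.9], 0),
        ([0.5, 0.8], 0), ([-0.9, 0.1], 1), ([0.6, 0.7], 0)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Train a tiny logistic regression with stochastic gradient descent.
w, b, lr = [0.0, 0.0], 0.0, 0.5
for _ in range(2000):
    for x, y in data:
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        err = p - y
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
        b -= lr * err

def attrition_risk(features):
    """Return the model's estimated probability that a partner leaves."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, features)) + b)

print(attrition_risk([-0.7, 0.2]))  # declining hours: high risk
print(attrition_risk([0.6, 0.8]))   # engaged partner: low risk
```

The point is not the algorithm but the shape of the work: HR defines the variables, the model scores every candidate or partner consistently and quickly.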

Setting a vision or desired future

The cost to hire, train and keep a skilled lawyer can be high.  The efficiencies gained by greater insight into lawyer HR data will reduce overhead costs.  ML models would predict a candidate’s potential for attrition and match skill levels in a more automated fashion.

Machine learning will only improve HR assessment and feedback, as well as identify opportunities for early intervention on job dissatisfaction.  The law firm would be able to improve its workplace ratings on a national basis, thereby making it a more desirable place to work and providing differentiation in the marketplace (another Porter strategy dynamic).

When the firm is able to hire the best resources at a lower cost, the market focus that they provide will only improve and provide clear competitive differentiation, which aligns with their market strategy of industry-focused teams.

The current state is manual and based on human experience only.  With the ability to increase data points and measurement, the combination of ML and HR resources would create an increased collective intelligence.  This is in line with the three market factors of Porter: cost, differentiation and focus.

Technical, leadership and managerial requirements

For machine learning to provide the most benefit, subject matter experts first have to assist in the development of the AI project.  Thus, practice leads interviewing staff, HR leads, existing BI, and tech resources need to take part.  All these roles will need to engage with the development, testing and training of the model.  There has to be executive sponsorship for this to succeed.  The sponsor’s role is to communicate and ensure the project stays on track and on schedule, but also to be seen as involved.

Technical requirements would ensure that the sources of data, such as HR and case management data, can be accessed and offloaded to “parameterize” the data sets.  The “parameterizing” of the machine learning model is critical to the success of AI.  Natural language processing will extract data from CVs and other free-form text sources of information.
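As a rough illustration of turning free-form text into parameters, the sketch below pulls a few structured fields out of a toy CV.  The CV wording, field names and patterns are assumptions for the example; production NLP would go well beyond regular expressions.

```python
import re

# Toy CV text; a real pipeline would ingest parsed documents.
cv = """Jane Doe, J.D. - Admitted to the New York Bar 2012.
Practice areas: Mergers & Acquisitions, Securities Litigation.
Email: jane.doe@example.com  Phone: 212-555-0100"""

def parameterize_cv(text):
    """Extract simple structured fields from free-form CV text."""
    return {
        "email": re.search(r"[\w.+-]+@[\w-]+\.[\w.]+", text).group(),
        "bar_year": int(re.search(r"Bar\s+(\d{4})", text).group(1)),
        "practice_areas": [a.strip() for a in
                           re.search(r"Practice areas:\s*(.+)", text)
                             .group(1).rstrip(".").split(",")],
    }

record = parameterize_cv(cv)
print(record["bar_year"])        # 2012
print(record["practice_areas"])  # ['Mergers & Acquisitions', 'Securities Litigation']
```

Each extracted field becomes one more data point the ML model can use when identifying candidates.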

The next instalment will look more closely at the NLP aspect of this use case.  We will also look at the need for a “domain” or corpus of knowledge for the AI to be of any value.

There are additional technical requirements.  What does the organization have currently?  Does the tech staff have the appropriate training for BI tools, predictive analytics and data integration tools?  Another consideration: can this be done in the cloud, or does it need to be done on premise due to the sensitive nature of the data sets?  What are the security and privacy capabilities of the ML technology?  Has an ethical review been done on the types of information and sourcing to minimize risk?  These questions need to be addressed by all of the team members.

AI and Ethics


On a recent trip to New Orleans for work, we had the privilege of touring the location that houses and builds the floats for the Mardi Gras parade.  I took quite a few photographs but this grouping of money bags and Elvis got me thinking about ethics and AI.

You will be hearing more and more of this issue.  There are many who do not understand AI (ML, NLP and RPA) and are therefore concerned about what it will and will not do.

I had the benefit of taking a course on Ethics in Data Science, as well as a course through MIT Sloan on business strategy and AI (I’m halfway through, so wish me luck).

There are many factors for success or failure.  I think one key factor is the ability to bridge the gap between technology and business units.  

“Explainability” is something that you will hear a lot about: the ability to clearly explain what AI is doing and how it came to its outcome.  It also means clearly showing there is no ossification of the system: that a hiring system is not hard-wired to the historical hiring practices of the organization, or that young people are not being denied loans by a banking system because the model is biased towards older applicants, having never been given data sets for a population below the age of 35.
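One simple explainability check along these lines is a training-data coverage audit: before anyone trusts a model, verify which population segments it ever saw.  The records and age buckets below are entirely made up for illustration.

```python
# Hypothetical training records: (applicant_age, approved).
training = [(52, 1), (61, 0), (45, 1), (58, 1), (49, 0), (63, 1), (40, 1)]

def coverage_by_bucket(records, buckets=((18, 35), (35, 50), (50, 100))):
    """Count training examples falling into each age bucket."""
    counts = {b: 0 for b in buckets}
    for age, _ in records:
        for lo, hi in buckets:
            if lo <= age < hi:
                counts[(lo, hi)] += 1
    return counts

counts = coverage_by_bucket(training)
gaps = [b for b, n in counts.items() if n == 0]
print(gaps)  # [(18, 35)] - no applicants under 35 in the training set
```

An empty bucket is exactly the under-35 loan scenario above: the model isn’t malicious, it has simply never seen that population.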

Ethics and AI will become more important to the point that we are now starting to talk about regulation and compliance to ensure good use of AI vs uncontrolled use of information and models.

The coin is in the air and flipping; I wonder which way it will land?

Maybe I should develop an AI Maturity Model assessment toolkit.  I did this for case management systems and cognitive computing, so I should be able to do it for AI maturity.  And in doing so, ethics and transparency will have to play a key role.

Executive Guide to Cognitive Computing Part 5

Recommended Strategic Planning and Considerations for Cognitive Computing Projects

In order to proceed, one must start to define the use case (business problem) and, more importantly, the question or questions that need to be answered.  As mentioned earlier, cognitive computing is a new focus on knowledge or information automation vs. the process automation that we are familiar with in traditional technology systems.

Defining the intention of the project

This may seem apparent, but organizations can get caught up in the excitement surrounding cognitive computing and lose sight of what type of problem needs to be solved or resolved.

1. Type of questions/information to be asked or analyzed?

2. What type of dialog/conversation to be supported?

3. Integration with pre-existing systems?

4. Number of potential users?

5. Corpora or Corpus Size (Type of Data and # of Documents) and level of complexity?

Defining the Objective

Having a clear idea of the outcome of the project is the next step.  This ensures that the proper cognitive technology is applied.  Are you analyzing text?  Are you trying to assist a call centre agent?  Are you attempting to automate a question-and-answer conversation?  The objective must be clear:

  • What type of problem are you trying to solve?
  • Who will be the user?
  • Are there multiple types of users, and what are their expectations?
  • What issues will your users be interested in?
  • What do they need to know?
  • Will they need to answer questions like How and Why?
  • What is the objective based on knowledge and data? (vs. process)
  • What type of knowledge will be pivotal to the corpus? (segment of domain or business/industry)
  • Will the system need to provide assistance? (citizen or agent)

 

Defining the Domain to analyze:

  • Helps to identify data sources as well as SMEs that will need to be involved
  • Can the objective narrow the domain focus?
  • Are there domain taxonomies, ontologies and catalogues?
  • Have you identified additional data sources not typically associated with solving problems in that domain? (i.e. learned by experience)

Recommended Next Steps on your Cognitive Computing Journey

Assess the Cognitive Computing Maturity of the organization

Knowing where you are today vs. where you want the organization to be in the future is critical.  Identifying gaps and priorities in how and where Cognitive Computing can be used provides a clear idea on what you have invested in the past and what you will need to invest in the future.  A Cognitive Computing maturity assessment is a quick way to start to understand the level of effort required.  As shown in the illustration it can quickly guide decisions in planning and investment in this new technology.


Figure 2: Cognitive Computing Maturity Model Assessment Example

Engage with IT and IM

Since cognitive systems will rely on accessing more forms of information (text, pictures, voice, sensor, geo-spatial and traditional sources), business and IT/IM must work together.  Although business may lead the initiative, sourcing all that information will require a broader, more cross-functional team to help develop domains of knowledge.  Include the organizations that manage the ECM environments as well as other forms of information.  Understand where Open Data is coming from and what governance issues are associated with combining many forms of information together.  With IT/IM and the lines of business working together, the outcome of any project will be more successful.  However, it must be clear who the ultimate owner of the project is.

Cultural and Organization Readiness

Cognitive computing, like analytics or Big Data, requires a shift in the organization’s culture.  Once groups and individuals learn that they can ask complicated questions and get answers in a shorter period of time, receptivity to a new, innovative technology will be positive.

Never belittle the importance of organizational change and fundamental training for the new projects or programs.  The organizational change management plan must be aligned to the domain and objective of the cognitive project.

Prioritize the Business Problem to resolve

Cognitive systems are predicated on solving a specific problem, so it is critical that a clear business problem is identified.  Typically, it starts with a question or an area that needs to be better understood.  The traditional approach of IT defining the solution for the problem may not necessarily work.  What is needed is a cross-functional planning team that sees the business problem to be solved from many directions.  This team needs executive sponsorship and participation, as well as multiple lines of business, because cognitive systems rely on more sources of information, which multiple lines of business may hold.  IT will be required to prioritize and plan access to the sources of information (data sets).

Identify Key Questions to answer (How? and Why?)

Traditional analytics technologies have been able to answer questions like What? Where? and When?  However, cognitive systems can answer those types of questions as well as How? and Why?  In order to answer those questions there has to be a coordinated plan and a team that can help define the questions and then start to assess where the information is in order to answer them.

Rapid Proof of Technology Exercise

In order to quickly assess the capability of cognitive computing, a proof-of-technology session should be planned.  This encompasses a limited-scope workshop that tests the viability of answering a question.  This approach is strongly recommended since it addresses all the points raised above.

Skill Development

Lastly, since cognitive systems require a new method of programming (algorithmic), skills may have to be developed and enhanced within the IT organization as well as the lines of business.  This ties in with cultural change.  The dynamics of cognitive systems are different from traditional IT systems and project planning.  Without proper skill development, no system will succeed.

Executive Guide to Cognitive Computing Part 4

Five Dimensions that Cognitive Computing will evolve in Public Sector programs

As Cognitive Systems get deployed into Public Sector programs they will continue to evolve over time due to the nature of algorithmic programming and natural language processing of information.  Cognitive systems are dependent upon a feedback loop and as such these dimensions will have an impact on not only the technology but the programs or projects they support.  This evolution is predicated on a few dimensions that affect perception and adoption.


Figure 1: Five Dimensions of how Cognitive Computing will evolve over time

 

Cognitive Computing will evolve over 5 dimensions that span both technology and cultural aspects of a public sector program.

Personalized Interactions

Given the ability of cognitive systems to interact via natural language and learn from those interactions, each business problem may require varying levels of interaction and personalization.  For example, a social-service benefit self-serve application will need a much more intimate understanding of the citizen.  Each person will need to be understood on more dimensions of interaction: access rights, location, personality, tone, sentiment, log history, etc.  All of these dimensions will provide a more satisfying interaction and outcome regardless of the user type.

Learning

As has been explained, cognitive systems rely heavily on machine learning.  Machine learning algorithms can be supervised or unsupervised.  It comes down to what level of complexity and knowledge the algorithm has of a specific domain.  Some applications will constantly need the input of a human subject matter expert in order to learn, whereas other systems will continue to enhance themselves through automated feedback loops.
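The supervised/unsupervised distinction can be sketched in a few lines.  Both halves below use the same toy two-dimensional points (entirely invented): the supervised half learns from labels, while the unsupervised half infers two groups with no labels at all.

```python
# Toy labeled cases: features plus a label an SME supplied.
labeled = [([1.0, 1.2], "routine"), ([0.9, 1.1], "routine"),
           ([5.0, 4.8], "complex"), ([5.2, 5.1], "complex")]

def centroid(points):
    return [sum(p[i] for p in points) / len(points) for i in range(len(points[0]))]

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Supervised: nearest-centroid classification from the labeled examples.
by_label = {}
for x, y in labeled:
    by_label.setdefault(y, []).append(x)
centroids = {y: centroid(xs) for y, xs in by_label.items()}

def classify(x):
    return min(centroids, key=lambda y: dist(x, centroids[y]))

print(classify([4.9, 5.0]))  # complex

# Unsupervised: two-means clustering on the same points, labels ignored.
points = [x for x, _ in labeled]
c1, c2 = points[0], points[-1]
for _ in range(10):
    g1 = [p for p in points if dist(p, c1) <= dist(p, c2)]
    g2 = [p for p in points if dist(p, c1) > dist(p, c2)]
    c1, c2 = centroid(g1), centroid(g2)
print(len(g1), len(g2))  # 2 2
```

The supervised model needs the SME’s labels up front; the clustering finds structure on its own but cannot name what it found, which is why many deployments mix the two.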

Recommendation: Develop a culture where analysis and question-asking are supported, so that cognitive systems aid decision-making outcomes.  There is also a need for SMEs within a domain or program area to participate in helping the cognitive systems learn.  This will impact workforce dynamics and must be positioned correctly so that users do not feel threatened by the new systems being developed.

Sensing

Since cognitive systems analyze larger data sets and require more dimensions of data, one path along which they will evolve is the number of data sets used to answer questions or make decisions.  One will see an expansion of the information types to ‘sense’ and decide upon.  The Internet of Things, dirty data, Big Data, Open Data, geo-spatial and social media information sources provide greater contextual understanding for cognitive systems to integrate with, and additionally enhance their analysis capability.

Recommendations: Policies will be challenged to address the evolution of information access in accordance with public sector regulations and compliance.  Planning and strategy will be required for this evolution to occur.

Ubiquity

Public sector workers are getting younger and are technically ‘rich’ in their personal lives.  The expectation of technology use will increase exponentially as the public sector workforce changes.  The need to embed cognitive systems in how and where people work, regardless of device or location, will need to be planned for.

Recommendations: Since cognitive systems focus on information automation vs. process automation and the information can be presented or integrated in any form the ubiquity of the interplay between systems and users can be supported to meet the demand of the new public sector workforce.

Scalability

The ability to interact with government workers or citizens will continue to be enhanced by the continuing development of natural language and conversation algorithms, which will ensure that the interaction between user and technology becomes easier over time.

Cognitive systems continue to enhance themselves through artificial intelligence and machine learning.  The evolution of feedback loops and deep neural networks will ensure that developed algorithms are enhanced in a more automatic fashion, so that the system can truly learn how to interact and respond better, with increased levels of confidence and information.

Cognitive systems are really a culmination of existing systems and algorithmic programming as well as the need to incorporate more and larger data sets such as geospatial, weather, social media and internet of things data (IoT).

Recommendations:

Public Sector leaders must plan for the fact that in order to scale systems one must be reliant on technology on premise as well as in the Cloud or with a Platform as a Service (PaaS). 

How do I use Cognitive Computing and for what benefit?

To truly understand what and how cognitive systems will benefit your organization it is best to see how other public sector programs are starting to use cognitive machine learning and natural language processing to enhance programs.

Tax

A large taxation department is using advanced methods of natural language processing to analyze structured information (SWIFT Transactions) and unstructured information such as social media and addresses to investigate and analyze off-shore financial transactions.

Health Agency

A national health agency that is tasked with researching and assessing immunization products for the country it serves is looking at what natural language processing and text analytics will allow it to do.  Instead of manually reading thousands of medical journals and research documents by hand, which currently takes 10 months, the agency hopes to analyze and extract insight within a shorter period of time, thereby getting more effective immunization treatments to the populace sooner.

Law Enforcement

Investigations into major crimes and drug gang activity are being enhanced by combining many sources of information and allowing investigators to ask in natural language who someone is, or even where they are.  This information is then fed into visualization tools to better see how individuals and organizations are linked.  Advanced analysis tools are being used for image analysis and the extraction of meaningful evidence that previously could only have been done by a human.  This allows a larger body of evidence to be gathered and, because it is automated, the chain of evidence or forensics can be maintained as it is handed over to prosecution.

Challenges and Opportunities for Cognitive Computing in Public Sector

Challenges to Cognitive Computing in Public Sector

The challenge in today’s world of improving or innovating government programs is that we have a broad array of information and process automation to co-ordinate.  Another major challenge of being in a data-driven world is that information can be wrong, false, out of date or inaccessible.  Cognitive computing, with its ability to apply algorithmic programming, allows advanced patterns to be identified out of a much larger group of data sets, which allows us to reduce the “noise” associated with making decisions and program outcomes.

There is also a need for evidence-based decision making, which needs to follow a prescribed methodology, as well as a need to analyze larger bodies of knowledge and information.  Traditional rules- and equation-based programming cannot interact with a human in natural language.  Cognitive computing interacts with a government worker in natural language, and with the ability to learn and enhance the algorithms needed to find and test hypotheses or questions.

Government must ensure information is secured and managed effectively.  Moving to cloud-based computing, or sharing data sets across government, raises the issue of how far the data or the information can move from where it was created.  Information provenance and governance practices must be in place, and private cloud or platform-as-a-service cognitive computing service catalogs are needed to ensure the data is kept within the boundaries of how it is to be governed and managed.  Data residency is another issue associated with using cloud services; however, most major providers of cloud services have data centres within the country, thereby offsetting issues associated with data residency.

Recommendations:

Most governments around the world today have a shared-services model for core ICT and enterprise application support.  We are seeing that governments are now looking at cloud brokerage services managed within the government, which deals with data gravity issues.  And based on the nature of the API economy, we see that PaaS (Platform as a Service) offerings are now being investigated and tested.  Therefore, we see central shared-services agencies being the agents of change, and we will look to them to deploy a cognitive computing PaaS as a service catalog that other government agencies and projects can leverage, ensuring information is secure and protected depending on the type of information.

Opportunity to Innovate in Government Programs

Due to the ability of cognitive computing to identify patterns or information at high speed and across large sets of information, the opportunities in government are broad.  Since all information sources can be analyzed and combined (databases, text, etc.), a more complete picture is provided to an individual to make decisions.

Any area within a government program that houses a large set of information relevant to a specific domain (benefits, policy, regulations, etc.) would benefit from cognitive computing, since this information can be analyzed as well as added to a corpus of knowledge that the machine learning algorithms can access and analyze across dimensions such as time or relevance.

Six forces that will impact the future evolution of cognitive computing in Public Sector. 

Each facet has its own issues and challenges for this technology to be adopted.

Society

  • Tremendous demand for more intelligent machines and access through mobile devices can facilitate familiarity and comfort with technology
  • Fears of privacy breaches and machines taking human jobs could be a deterrent

Perception

  • Perceptions and expectations must be well managed
  • Unrealistic perceptions of risk and expectations could lead to a third “AI winter”

Policy

  • Wider adoption will require modifying policies (e.g., data sharing) and creating new policies (e.g., decision traceability)
  • Fear, uncertainty and doubt may be addressed by new policies (e.g., data security & privacy)

Technology

  • Advanced, intelligent devices will enable a greater understanding of entity context and contribute to the robustness of available information corpora
  • Greater scalability needs drive new architectures and paradigms

Information

  • Variety and scalability capabilities of future systems will advance rapidly to cope with information exhaust
  • Information explosion could advance evolution and adoption rates

Skills

  • Cognitive computing demands unique skills such as natural language processing, machine learning
  • Greater availability of key skills will be key in the evolution and adoption of the capability

Recommendations: One must co-ordinate a strategy that revolves around the areas discussed above.  The fundamental challenges, similar to cloud-based computing in the public sector, will be the policy and cultural change that needs to be managed in order for the information and technology to develop.

What is Cognitive Computing and Why Should Program Executives Care?

What makes up Cognitive Computing

Cognitive computing is comprised of three main functional areas: natural language processing, machine learning and hypothesis testing.  All three functions combine to provide greater flexibility.  This helps address a broader array of business problems in the public sector, problems that could not have been solved earlier.  Natural language processing enables machine learning and discovery algorithms to interact with the end user in a more meaningful way.

Natural Language Processing (NLP)

NLP describes a set of linguistic, statistical and machine learning techniques that allow text to be analyzed and key information to be extracted for business value.  Natural language analysis uses a pipeline processing approach, wherein the question or text is broken apart by algorithms so that the structure, intent, tone, etc. are understood.  Specific domains of knowledge, such as legal, finance or social services, require targeted “dictionaries” or filters.  This helps to further improve the ability of the technology to understand what is being asked of it.
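A toy version of that pipeline might look like the sketch below: each stage passes its output to the next, and an assumed legal “dictionary” stands in for a real domain filter.

```python
# Toy domain dictionary; a real system would use curated taxonomies.
LEGAL_DICT = {"liable", "contract", "breach", "damages"}

def tokenize(text):
    """Stage 1: break the text apart into normalized tokens."""
    return [t.strip(".,?!").lower() for t in text.split()]

def tag_domain(tokens):
    """Stage 2: mark which tokens the domain dictionary recognizes."""
    return [(t, t in LEGAL_DICT) for t in tokens]

def classify_intent(tagged):
    """Stage 3: crude heuristic - a question word plus a domain term."""
    has_q = any(t in {"who", "what", "when", "why", "how"} for t, _ in tagged)
    has_domain = any(in_dict for _, in_dict in tagged)
    return "legal-question" if has_q and has_domain else "other"

text = "What damages apply for breach of contract?"
print(classify_intent(tag_domain(tokenize(text))))  # legal-question
```

Real NLP pipelines add parsing, entity recognition and statistical models at each stage, but the pipeline shape is the same.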

One of the key benefits of NLP is that it improves the interaction between humans and systems.  Some additional benefits are as follows:

A question’s contextual understanding can be derived from NLP.  IT organizations develop a meta-data (information about information) strategy, which gives more context to data and information sources.  The more meta-data and context added to a system, the better the understanding.  This improves the finding of information and the provision of an answer back in natural language.  Instead of the page-ranking result one would get from a typical search engine, the response is in a form the user will understand.

The intent of a question is then better understood, which means the cognitive system can respond more meaningfully, as well as return various responses with an associated confidence level.  This gives the end user a more meaningful response upon which to make a decision.

Natural language processing has taken interaction and access to information to a whole new level, which will in turn provide increased productivity and satisfaction to the end user.

Machine Learning

Machine learning is all about using algorithms to help streamline organization, prediction and pattern recognition.  Big Data by itself can be daunting, and only data scientists can build and interpret the analysis; by incorporating machine learning and natural language processing, Big Data becomes easier to interpret for a broader user group.  Part of machine learning’s “secret sauce” is the use of deep neural networks to do information pattern analysis.

Deep neural networks, due to their multi-dimensional view, can learn from regularities across layers of information, which allows the machine learning algorithm to self-enhance its model and analysis parameters.  This capability takes the onus away from the end user or data scientist to mine information from multiple sources.

The benefit to public sector programs is that deep domain knowledge will not be needed by the end user or the citizenry; the machine learning algorithms do the heavy lifting and analysis for them.

Historically, organizations had to depend on limited ways to analyze and report on information, which to a certain degree limited decision making and program outcomes.  Now, with machine learning, a key benefit is accessing these systems with natural language or extremely flexible visualization tools, which make decision making easier and more productive, since cognitive systems are about knowledge automation vs. process automation.

Hypotheses Testing

A hypothesis is a proposed answer to a question, to be tested against pre-existing or understood responses.  From there, a cognitive application will use the information that resides within a certain corpus or domain of knowledge to test the hypothesis.  Unlike humans, who typically test hypotheses in a serial fashion, one of the key benefits of a cognitive system is that it can test hundreds of hypotheses in parallel.  We see this occurring in areas such as health care or intelligence, where various proposed outcomes are tested against a domain of knowledge.  The domain of knowledge can be comprised of many different sources and types of information.  Given that cognitive systems can test large volumes of hypotheses at high speed, programs and applications benefit from improved means to make decisions with confidence, and to remove the “noise” surrounding what is trying to be resolved.  Some benefits from hypothesis testing are:

Causal induction provides a key benefit to the user since it is based on statistical models deriving insight from a corpus or domain of knowledge.  As these models become more refined, the ability to derive insightful responses provides more meaningful interactions with the end user or citizen.

Probabilistic reasoning can generate multiple responses to a question, which allows the user to see all aspects of an outcome versus generating a specific bias to the problem at hand.  This is predicated on the system having enough context and arriving at a specific level of confidence to provide an answer to the question.  As systems learn through interaction and feedback, they will be able to identify if information is missing in order to provide an answer, which again enhances the decision-making process of a project or program.
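The parallel testing and confidence scoring described above can be sketched as follows; the corpus, the hypotheses and the substring-match “support” score are all stand-ins for real evidence scoring.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy corpus of observations and candidate hypotheses to test against it.
corpus = ["fever and rash observed", "rash after travel", "fever resolved",
          "no rash reported", "rash and joint pain"]
hypotheses = ["rash", "fever", "joint pain"]

def confidence(h):
    """Score a hypothesis as the fraction of documents mentioning it."""
    support = sum(1 for doc in corpus if h in doc)
    return h, round(support / len(corpus), 2)

# Test every hypothesis in parallel, then rank by confidence.
with ThreadPoolExecutor() as pool:
    ranked = sorted(pool.map(confidence, hypotheses),
                    key=lambda hc: hc[1], reverse=True)
print(ranked)  # [('rash', 0.8), ('fever', 0.4), ('joint pain', 0.2)]
```

The user sees every candidate answer with its confidence rather than a single result, which is the multiple-response behaviour described above.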

In summary, Cognitive Systems combine natural language processing with advanced algorithms and modelling tools to help workers make decisions in a shorter period of time and/or to provide more meaningful insight into a larger domain or corpus of information that the end user would never have been able to access or analyze before Cognitive Computing technologies.

An Executive's Guide to Cognitive Computing Part 1

What you need to know about Cognitive Computing Part 1

In the coming weeks I will lay out an explanation and recommendations on what Cognitive Computing is and how it can be used.  As always, I look forward to comments and thoughts raised.

Technology continues to develop and improve how we interact with systems and people.  Over the past decade there have been radical developments in how computers make sense of:

·      Text

·      Voice

·      Pictures

Given these capabilities, what will this mean for your agency or program?  The possibilities are endless when one considers the value of integrating text, voice and pictures into the decision-making process.  The purpose of this discussion is to investigate the “art of the possible” and to provide an overview of Cognitive Computing.  We will conclude with a focus on approaches and recommended next steps.

This new shift in technology provides the ability to automate knowledge and decision making in a more meaningful way.  Policy change needs to occur to reap the benefits of this new technology.  Cognitive Computing will face some challenges over the next few years and will evolve over time as it is used by public sector organizations.

Public Sector organizations around the world have too much information to analyze.  Traditional Information Technology (IT) systems cannot cope with the variety or volume.  Each department or program needs access to more sources of information: the more “dimensions” of information accessed (e.g. geo-spatial, social media, weather data), the better the outcome.

In today’s world, employees and programs are asking harder questions through traditional means.  As an example, one government agency is trying to analyze the dynamics of the underground economy, and a state government is attempting to understand why municipalities dissolve or succeed.  It is important to apply critical-thinking principles via cognitive computing, and there is a need to eliminate the “noise” that exists in today’s information-driven society.  Organizations can now use machine learning and hypothesis testing to apply critical thinking against larger sources of all types of information.

The recommendations of this discussion will focus on ensuring a clear objective and a complete, well-defined domain of knowledge.  This ensures that cognitive systems will work successfully to solve complex business problems.

Thoughts ?

Transformation and Innovation in Public Sector has new hope

I have seen many “transformations” throughout my work with governments: from punch-card Fortran programming to client-server computing, to web, to mobile and social media. Almost every presentation I have seen over the past twenty years speaks to the government’s need to transform: to do more with less and to innovate. So far there has been more of an evolution than a transformation in government IT. However, with new initiatives such as “Bring your own device” (BYOD), Cloud, “The Internet of Things”, “Appification”, anti-“skeuomorphism” user interface design (the word exists!), Big Data, Analytics, and Social and Mobile, there is a growing shift toward transformation within government technology.

Specifically, there is a changing culture due to new possibilities for government workers to leverage technology of their own. This shift is driven by the changing spectrum of user expectations and the free availability of information. Let us take me as an example. I am my very own IT department, complete with location services, rules, process automation, the Internet of Things, cloud, big data and a large supply of apps at my beck and call. I can use IFTTT (If This Then That) for simple rules to control the light bulbs and thermostats in my house. The power I have as an end user is boggling. What can we take away from all this? IT and central CIOs are now using this same technology and brainstorming about potential applications of similar techniques to meet end users’ expectations and demands. These thought leaders are pushing the government to empower end users in the same ways that these users are being empowered in their personal lives.

Despite movement toward transformation in government, I hesitate to get too enthusiastic. Major roadblocks to such progress still remain, chiefly procurement: due to the siloed government approach to selecting vendors, there is limited ability to adopt transformative technology.

But don’t give up hope.  Together, we can work to develop creative and strong business cases and procurement vehicles that can bring in transformative solutions.  To push transformation in government forward, we must have government IT and line-of-business executives motivated and educated to advocate for better and more dynamic procurement policies that can pave the way for the development of truly transformative solutions.

However, as I noted in my previous post featuring the infographic on Contextual Computing, I believe that because policies will be challenged in deploying that type of computing technology, procurement will be challenged as well.  Here is the executive report on the study done by IBM on Contextual Computing:

Empowering governments through contextual computing Exec Report.pdf

Contextual Computing

Contextual Computing unlocking the power of enterprise data Infographic
