Tuesday, September 25, 2018

Ask not what AI can do for Data Virtualization. Ask what Data Virtualization can do for AI.


Artificial Intelligence (AI) enables machines to perform tasks that are considered “smart.” Combined with machine learning, it reduces the need for explicit programming because the system learns and refines its own algorithms from data. AI has exploded in popularity and has taken a big leap forward in the last few years.

But what about Data Virtualization (DV) with AI? The first thought is usually AI optimizing queries on virtual data models. However, how can DV help AI? 

Why not leverage the virtues of Data Virtualization to streamline the AI data pipeline and process?

Example 1: Expedite the Machine Learning process when you need data from multiple sources

Suppose you want to train a Machine Learning (ML) model on composite data sets drawn from multiple systems. Federating disparate data “on the fly” is a core competency of Data Virtualization. Without it, you could always add custom code here and there to do the federating and cleansing, but that complicates and certainly slows down both the configuration of the AI and its execution. Even with huge amounts of data, Data Virtualization can quickly align and produce the appropriate data for ML without incurring the time and risk of hand coding.
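
To make that concrete, here is a minimal sketch of what the ML side can look like once a virtualization layer handles the federation. It assumes the layer exposes a standard ODBC endpoint and a federated view; the connection name, view name, and label column below are hypothetical placeholders, not actual Enterprise Enabler APIs.

    # A minimal sketch: training an ML model directly on a federated view
    # exposed by the data virtualization layer. The DSN "EnterpriseEnablerDV",
    # the view "customer_360", and the "churned" label column are hypothetical
    # placeholders.
    import pyodbc
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Connect to the virtualization layer as if it were a single database, even
    # though the view federates CRM, ERP, and spreadsheet data behind the scenes.
    conn = pyodbc.connect("DSN=EnterpriseEnablerDV")

    # One query returns the already-aligned, already-cleansed composite data set
    # (assumed here to expose numeric, model-ready features).
    df = pd.read_sql("SELECT * FROM customer_360", conn)

    # Train on the federated data exactly as you would on a single local table.
    X = df.drop(columns=["churned"])
    y = df["churned"]
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

    model = RandomForestClassifier()
    model.fit(X_train, y_train)
    print("holdout accuracy:", model.score(X_test, y_test))

The federation, joins, and cleansing all happen inside the virtual view, so the ML code never needs to know where the underlying data actually lives.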

Example 2: Define and Implement Algorithms

With Data Virtualization, a logical model or database drives the iterations, and DV's secure write-back capability can continuously update the data sources to reflect the latest optimum values in real time.

The learning occurs over many iterations through the logical model, usually with a huge data set. Each iteration brings more data (information) that is used to adjust and fine-tune the result set. That newest result becomes the source values for the next iteration, eventually converging on an acceptable solution. The processing loop may be a set of algorithms defined in an automated workflow that calculates multiple steps and decisions. The workflow is configured in the Enterprise Enabler Process engine, eliminating the need for programming.
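
The loop below is a minimal sketch of that iterate-and-write-back pattern, assuming the virtual layer can be reached over standard SQL with a readable logical view and a writable parameters table; every connection, view, and table name is a hypothetical placeholder, and the optimization step is deliberately simplified.

    # A minimal sketch (not Enterprise Enabler's actual workflow engine) of the
    # iterate-and-write-back loop described above. The DSN, view, and table
    # names are hypothetical placeholders, and the "learning" step is a toy.
    import pyodbc
    import pandas as pd

    conn = pyodbc.connect("DSN=EnterpriseEnablerDV")  # hypothetical DV endpoint
    cursor = conn.cursor()

    TOLERANCE = 0.001
    previous_estimate = None

    for iteration in range(100):
        # Read the latest result set through the logical model.
        df = pd.read_sql("SELECT unit_cost, demand FROM logical_model_view", conn)

        # Toy optimization step: recompute the optimum value from the newest data.
        estimate = (df["unit_cost"] * df["demand"]).sum() / df["demand"].sum()

        # Write the newest result back to the source so it becomes the starting
        # point of the next iteration.
        cursor.execute(
            "UPDATE optimization_parameters SET optimum_price = ? WHERE id = 1",
            float(estimate),
        )
        conn.commit()

        # Stop once successive iterations converge on an acceptable solution.
        if previous_estimate is not None and abs(estimate - previous_estimate) < TOLERANCE:
            break
        previous_estimate = estimate

In the product itself these steps would be configured in the Process engine rather than coded; the sketch just makes the read, recompute, and write-back cycle explicit.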

Example 3: DV + AI + Data Lake

Many companies are choosing to store their massive Big Data in Data Lakes. A data lake is usually treated as a dumping zone to catch any data (to learn more, read 5 Reasons to leverage EE BigDataNOW™ for your Big Data Challenges). Architects are still experimenting with the best way to handle this paradigm, but the mentality is, “If you think we'll be able to use it one day, throw it in.” With Data Virtualization, information can be fed in with a logical connection that can be defined across multiple data sources.



Enterprise Enabler® (EE) goes much further since it is not designed solely for Data Virtualization. The discovery features of EE can help after the fact to determine what’s been thrown in. EE can ensure that the data cleansing and management process executes flawlessly. Agile ETL™ moves data anywhere it is needed, without staging.

Since EE is 100% metadata-driven and fully extensible, it can morph to handle any integration and data management task that is needed. Enterprise Enabler is a single, secure platform that offers proactive configuration suggestions and maintains user access controls, versioning, monitoring, and logging.

From data access to product overview to data prep for business orders and reporting, Enterprise Enabler is the single data integration platform that supports it all.



Friday, September 14, 2018

#FBF Stone Bond Technologies Summer Internship


In honor of #FBF and with school back in full swing, we've asked some of our summer interns to share their first-hand perspectives on Stone Bond Technologies' internship program.

My time at Stone Bond Technologies was invaluable to me. The most beneficial aspect of my internship was the opportunity to experience firsthand how a company operates on a daily basis. With three years of computer science under my belt, this internship has been more helpful than many of my classes in preparing me for a career. I began the summer by working to update the software available in the Amazon Marketplace.  I had never done anything like this before, but with the help of my coworkers, I was able to figure it out. After this, I began getting familiar with the code behind the software. This prepared me for my final task of researching machine learning and trying to figure out how to improve it in the software. This was an exciting challenge because I had never worked with machine learning in school. I am incredibly grateful for the real world experience and opportunity to learn new areas of computer science that I could not have received in school.
– Chance Morris, Student at University of Houston 

Interning at Stone Bond Technologies is what most dream of: a warm, supportive, and open environment that not only works to highlight your skills but encourages the development of new ones. My experience interning at this company was truly unique. I got to work directly with the software and was always pushed to try harder and do better. They set high goals, and even if they were not met, the employees at Stone Bond Technologies made sure you landed among the stars. Every day was an opportunity to learn, and every question (of which there were many) was answered promptly and thoroughly.

I truly loved being able to work on multiple areas such as AI, data virtualization, and quality assurance. I was able to gain experience in SQL, Postman, TFS, and so much more. Working here is a reminder that good work is genuinely about consistency and the willingness to learn and improve oneself. The skills that I learned, both technology-wise and corporate-wise, are indispensable, and for that I am grateful. Stone Bond Technologies is a small company with a true soul that cares about its employees and its customers' best interests.
– Gabriela Marin, Student at University of Houston 

Go Coogs! Wishing all students all the best this year!! – SBT Team

Monday, April 30, 2018

Why Automating Access to Your Critical Data Is Important to You

When we talk about data accessibility in terms of critical thinking, a question we often ask our clients is: why do you think access to your critical data is important? The answer is rarely obvious; most often, the client doesn't have access to critical data at all, or else doesn't have access to it in time to take action.

At this point, many solution providers will immediately jump to the “how” of their solution and dive into all the cool features, bells, and whistles, without fully understanding why accessing data efficiently is important to their client, much less what decision the client needs it for. Eliciting this answer reveals your client's most important and relevant use cases, around which a new approach makes perfect sense. So let's give it some context before we dive into the “how.”

Accessing your data efficiently is about more than just finding and manipulating data from a single source or even heterogeneous platforms. Decision makers need to access the data to develop and deploy business strategies.  Even more importantly, we need to access critical data in time to take tactical action when warranted. The ability to see your business data in real-time also lets you check on the progress of a decision.

Data Virtualization is one solution to get at your critical data in real-time, even if it is scattered across multiple sources and platforms.

The first thing you need to know as a client is that you won't have to reinvent the wheel and replace one cumbersome approach with a different cumbersome approach.

Connectors, connectors, connectors.

Typical elements of a useful Data Virtualization paradigm include up-to-date, out-of-the-box data connectors, support for platform-specific protocols and APIs (and their coding languages), and consideration of whether modified data should be written back to the original source or to a new data set.

An example of write-back would be updating your balance in a Customer Care web application after making a payment online. For the reporting side, consider a scenario in which the following is assumed:

  • There are 5 offices across the world. They all speak different languages and have different platform protocols.
  • Currently, data is scattered across various sources (e.g., cloud applications, Excel, and data marts).
  • It takes too much time to combine the scattered data sources into one report. Transformation is a long process (with many steps) that turns raw data into end-user-ready data (report-ready for short). A rough sketch of that manual consolidation follows this list.
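
For contrast, here is a rough sketch of what that consolidation looks like when it is hand coded instead of virtualized; every endpoint, file path, column name, and connection string is a hypothetical placeholder.

    # A rough sketch (not Enterprise Enabler code) of the manual consolidation
    # described above: pulling from a cloud API, an Excel workbook, and a
    # relational data mart, then stitching the pieces into one report. Every
    # URL, file path, column name, and connection string is hypothetical.
    import pandas as pd
    import requests
    import sqlalchemy

    # 1. Cloud source (e.g., a SaaS application's REST API).
    cloud_df = pd.DataFrame(requests.get("https://api.example.com/v1/sales").json())

    # 2. Spreadsheet maintained by one of the regional offices.
    excel_df = pd.read_excel("regional_office_sales.xlsx")

    # 3. Relational data mart.
    engine = sqlalchemy.create_engine("postgresql://user:pass@datamart/sales")
    mart_df = pd.read_sql("SELECT region, quarter, revenue FROM sales_mart", engine)

    # Reconcile the differing schemas onto common column names (the kind of
    # per-source transformation a virtualization layer defines once as metadata).
    cloud_df = cloud_df.rename(columns={"salesRegion": "region", "period": "quarter", "amount": "revenue"})
    excel_df = excel_df.rename(columns={"Region": "region", "Quarter": "quarter", "Revenue": "revenue"})
    columns = ["region", "quarter", "revenue"]

    # Combine everything and roll it up into a single end-user-ready report.
    combined = pd.concat([cloud_df[columns], excel_df[columns], mart_df[columns]], ignore_index=True)
    report = combined.groupby(["region", "quarter"], as_index=False)["revenue"].sum()
    report.to_csv("consolidated_sales_report.csv", index=False)

A virtualization layer defines these same per-source connections and transformations once as reusable metadata, so the report can be queried on demand instead of being rebuilt by hand each time.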


With the right Data Virtualization tool, or Data Integration platform, you can consume data in real time: without waiting for IT, without making copies, and without physically moving the data.



Since Data Virtualization reduces complexity, reduces the spend on getting at and using your data, reduces the lag between requesting data and using it, reduces the risk of moving and relying on stale data, and increases your efficiency as a decision maker, it is almost certainly the path to take here.






This is a guest blog post; the author is anonymous.