Tuesday, September 25, 2018

Ask not what AI can do for Data Virtualization. Ask what Data Virtualization can do for AI.


Artificial Intelligence (AI) can perform tasks that are considered “smart.” Combined with machine learning, it reduces the need for explicit programming and hand-built algorithms. AI has exploded recently and taken a big leap forward in the last few years.

But what about Data Virtualization (DV) with AI? The first thought is usually AI optimizing queries on virtual data models. However, how can DV help AI? 

Why not leverage the virtues of Data Virtualization to streamline the AI data pipeline and process?

Example 1: Expedite the Machine Learning process when you need data from multiple sources

Suppose you want to train your Machine Learning (ML) scenario with composite data sets. Federating disparate data “on the fly” is a core competency of Data Virtualization. Without data virtualization, of course, you could always add custom code here and there to do the federating and cleansing, but that complicates and certainly slows down the configuration of the AI as well as its execution. Even with huge amounts of data to deal with, Data Virtualization can quickly align and produce appropriate data for ML without incurring the time and risk of hand coding.
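To make this concrete, here is a minimal sketch of training a model directly against a federated virtual view, assuming the DV layer exposes it over ODBC. The DSN name (EE_Virtual), view name (vw_customer_features), and target column (churned) are hypothetical placeholders, and scikit-learn stands in for whichever ML framework you actually use.

# Minimal sketch: train an ML model straight off a federated virtual view.
# The DSN, view, and column names below are hypothetical placeholders.
import pandas as pd
import pyodbc
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# One query against the virtual model; the DV layer federates the underlying
# CRM, ERP, and spreadsheet sources on the fly.
conn = pyodbc.connect("DSN=EE_Virtual")
df = pd.read_sql("SELECT * FROM vw_customer_features", conn)

X = df.drop(columns=["churned"])
y = df["churned"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)
print("Holdout accuracy:", model.score(X_test, y_test))

The point of the sketch is that no per-source extraction or staging code appears anywhere; the federation lives in the virtual model.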

Example 2: Define and Implement Algorithms

When a Data Virtualization model or database is driving the iterations, DV’s secure write-back capability can continuously update the data sources to reflect the latest optimum values in real time.

The learning occurs over many iterations through the logical model, usually with a huge data set. Each iteration brings more data (information) that is then used to adjust and fine-tune the result set. That newest result becomes the source values for the next iteration, eventually converging on an acceptable solution. That processing loop may be a set of algorithms defined in an automated workflow that calculates multiple steps and decisions. The workflow is configured in the Enterprise Enabler Process engine, eliminating the need for programming.
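A hedged sketch of that processing loop, assuming a hypothetical virtual view (vw_training_state) with write-back enabled; the adjustment rule here is a toy stand-in for the real algorithm, which in EE would be configured in the Process engine rather than hand coded.

# Toy sketch of the iterate-and-write-back loop described above.
# DSN, view, and column names are hypothetical; the 0.95 adjustment is a
# stand-in for whatever the real algorithm computes each iteration.
import pyodbc

conn = pyodbc.connect("DSN=EE_Virtual")
cursor = conn.cursor()

best_error = float("inf")
for iteration in range(50):
    rows = cursor.execute(
        "SELECT param_id, current_value, observed_error FROM vw_training_state"
    ).fetchall()

    for param_id, value, error in rows:
        if error < best_error:
            best_error = error
            # Write-back through the virtualization layer updates the
            # underlying source of record with the latest optimum.
            cursor.execute(
                "UPDATE vw_training_state SET current_value = ? WHERE param_id = ?",
                value * 0.95,
                param_id,
            )
    conn.commit()

    if best_error < 0.01:  # acceptable solution reached
        break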

Example 3: DV + AI + Data Lake
Many companies are choosing to store their massive Big Data in Data Lakes. A data lake is usually treated as a dumping zone to catch any data (to learn more, read 5 Reasons to leverage EE BigDataNOW™ for your Big Data Challenges). Architects are still experimenting with the best way to handle this paradigm, but the mentality is, “If you think we’ll be able to use it one day, throw it in.” With Data Virtualization, information can be fed in with a logical connection and can be defined across multiple data sources.



Enterprise Enabler® (EE) goes much further since it is not designed solely for Data Virtualization. The discovery features of EE can help after the fact to determine what’s been thrown in. EE can ensure that the data cleansing and management process executes flawlessly. Agile ETL™ moves data anywhere it is needed, without staging.

Since EE is 100% metadata-driven and fully extensible, it can morph to handle any integration and data management task. Enterprise Enabler is a single, secure platform that offers proactive configuration suggestions and maintains user access controls, versioning, monitoring, and logs.

From data access to product overviews to data prep for business orders and reporting, Enterprise Enabler is the single data integration platform that supports it all.



Friday, September 14, 2018

#FBF Stone Bond Technologies Summer Internship


In honor of #FBF and school being back in full swing, we’ve asked some of our summer interns to share first-hand their perspectives on the Stone Bond Technologies internship program.

My time at Stone Bond Technologies was invaluable to me. The most beneficial aspect of my internship was the opportunity to experience firsthand how a company operates on a daily basis. With three years of computer science under my belt, this internship has been more helpful than many of my classes in preparing me for a career. I began the summer by working to update the software available in the Amazon Marketplace.  I had never done anything like this before, but with the help of my coworkers, I was able to figure it out. After this, I began getting familiar with the code behind the software. This prepared me for my final task of researching machine learning and trying to figure out how to improve it in the software. This was an exciting challenge because I had never worked with machine learning in school. I am incredibly grateful for the real world experience and opportunity to learn new areas of computer science that I could not have received in school.
– Chance Morris, Student at University of Houston 

Interning at Stone Bond Technologies is what most dream of: a warm, supportive, and open environment that not only works to highlight your skills but encourages the development of new ones. My experience interning at this company was truly unique. I got to work directly with the software and was always pushed to try harder and do better. They set high goals, and even if they were not met, the employees at Stone Bond Technologies made sure you landed amongst the stars. Every day was an opportunity to learn, with every question (of which there were many) answered promptly and thoroughly.

I truly loved being able to work on multiple areas such as AI, data virtualization, and quality assurance. I was able to gain experience in SQL, Postman, TFS, and so much more. Working here is a reminder that good work is genuinely about consistency and the willingness to learn and improve oneself. The skills that I learned, both technology-wise and corporate-wise, are indispensable, and for that I am grateful. Stone Bond Technologies is a small company with a true soul that cares about its employees’ and its customers’ best interests.
– Gabriela Marin, Student at University of Houston 

Go Coogs! Wishing all students all the best this year!! – SBT Team

Monday, April 30, 2018

Why Automating Access to Your Critical Data Is Important to You

When we talk about data accessibility in terms of critical thinking, a question we often ask our clients is: why do you think access to your critical data is important? The answer is rarely obvious, and most often the client doesn’t actually have access to critical data, or doesn’t have access to it in time to take action.

At this point, many solution providers will immediately jump to the “how” of their solution and dive into all the cool features, bells, and whistles, without fully understanding why accessing data efficiently is important to their client, much less what decisions the client needs it for. Eliciting this answer reveals your client’s most important and relevant use cases, around which a new approach makes perfect sense. So let’s give it some context before we dive into the “how.”

Accessing your data efficiently is about more than just finding and manipulating data from a single source or even heterogeneous platforms. Decision makers need to access the data to develop and deploy business strategies.  Even more importantly, we need to access critical data in time to take tactical action when warranted. The ability to see your business data in real-time also lets you check on the progress of a decision.

Data Virtualization is one solution to get at your critical data in real-time, even if it is scattered across multiple sources and platforms.

The first thing you need to know as a client is that you won’t have to reinvent the wheel, replacing one cumbersome approach with a different cumbersome approach.

Connectors, connectors, connectors.

Typical elements of a useful Data Virtualization paradigm are up-to-date, out-of-the-box data connectors, platform-specific protocols, useful API protocols and API-specific coding languages, and thought given to whether modified data should be written back to a source or to a new data set.

An example of the latter would be updating your balance in a Customer Care web application after making a payment online. In this example, the following is assumed (a short sketch follows the list):

  • There are 5 offices across the world. They all speak different languages and have different platform protocols.
  • Data is currently scattered across various sources (i.e., cloud, Excel, and data marts).
  • It takes too much time to combine the scattered data sources into one report. Transformation is a long process (with many steps) that turns raw data into end-user-ready data (report-ready for short).
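Here is the sketch referred to above: with the scattered sources federated behind one hypothetical virtual view (vw_global_orders on a DSN named EE_Virtual), the report collapses to a single query. All names are invented for illustration.

# Rough sketch under the assumptions above: one query over a federated view
# that the DV layer defines across the office databases, the cloud source,
# the Excel extract, and the data marts. DSN, view, and columns are hypothetical.
import pandas as pd
import pyodbc

conn = pyodbc.connect("DSN=EE_Virtual")

# The join, currency normalization, and other transformations are configured
# once in the virtual model, so the report is a single report-ready query.
report = pd.read_sql(
    """
    SELECT office, fiscal_month, SUM(order_amount_usd) AS total_orders
    FROM vw_global_orders
    GROUP BY office, fiscal_month
    ORDER BY office, fiscal_month
    """,
    conn,
)
report.to_csv("global_orders_report.csv", index=False)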


With the right Data Virtualization tool, or Data Integration platform, you can consume data in real time: without waiting for IT, without making copies, and without physically moving data.



Since Data Virtualization reduces complexity, reduces the spend on getting at and using your data, reduces the lag from requesting data to using it, reduces the risk of moving and relying on stale data, and increases your efficiency as a decision maker, it is almost certainly the path to take here.

This is a guest blog post; the author is anonymous.

Tuesday, December 19, 2017

The Mysteries of Data Transformation

In the beginning, long ago and far away, I thought all the data integration products had embedded Transformation Engines. After all, the biggest challenge, really, is making disparate data make sense together and align appropriately, so that the data from the sources is meaningful to the target consumer.

Today, data transformation and manipulation are even more critical for Data Virtualization than for ETL, since you have to get it right in one pass. ETL has the dubious “luxury” of adding however many steps and copies along the way are needed to ease the pain, but with DV it’s imperative to make sure that everything is in good order for cleaning, aligning, filtering, and federating before you present the virtual model for querying.

Chances are you are spending a lot of time and energy preparing and dealing with the dirty details of managing messy data with your current integration or data virtualization product. That's because I was wrong about every integration platform having a transformation engine. 


What is Data Virtualization without a Transformation Engine?
Think about it a bit. Without a legitimate transformation engine, Data Virtualization can only work in a perfect world, where data has been cleaned and where data naturally aligns without manipulation. Maybe you can get away with format differences.
OK, so if the data has already been cleaned, you are not actually getting the data from the source, right? And, isn’t it then carrying the latency of all that housekeeping? Isn’t that counter to what DV is all about?

Of course, there are times when the best overall solution is, in fact, to prepare a clean copy of the data set and query against it. Often an ODS (Operational Data Store) is the best source to use exactly because proven cleansing algorithms are already in place. Enterprise Enabler is the only Agile ETL™ integration platform, and it can actually do the cleaning as well as the Data Virtualization, thanks to the robust embedded Transformation Engine!

Enterprise Enabler® (EE) Transformation Engine is the Great Orchestrator
Recently, I’ve been thinking a lot about our Transformation Engine, and I’ve come to believe that it may be the single most important asset of Enterprise Enabler. When we introduce the architecture and components of the EE platform, we tend to take it for granted, unwittingly doing a disservice to the Transformation Engine with a simple one-liner. In fact, the Transformation Engine (TE) is the heart and brain of all the logic and run-time processing of data throughout Data Virtualization, Agile ETL™, and all modalities of integration. We describe it as the conductor, orchestrating and issuing instructions as configured in the metadata.





T.E.: “Hey, SAP AppComm, bring me the data from TemplateA. Merci! Now, Salesforce AppComm, get the data from your Template. Next, let’s apply the federation and validation rules on each data set and package it as a federated, queryable data model. Oh, and while you’re at it, send that data directly and physically to the Data Warehouse. Voila!”

Obviously, this is a simplification, and I may not have gotten the accent quite right, but that EE Transformation Engine is one smart cookie that outperforms the alternative solutions.
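To show the shape of that orchestration in plain code, here is a toy sketch of the conductor pattern: validate each source’s data, federate on a shared key, expose the result as one record set, and push a physical copy onward. The row sets and function names are invented stand-ins, not EE’s AppComm API.

# Toy illustration of the "conductor" flow described above. In EE this logic
# is driven from metadata; here it is spelled out by hand for clarity.

def validate(rows, required_fields):
    # Simple validation rule: drop records missing any required field.
    return [r for r in rows if all(r.get(f) is not None for f in required_fields)]


def federate(left, right, key):
    # Merge two row sets on a shared key into one federated record set.
    right_index = {r[key]: r for r in right}
    return [{**l, **right_index[l[key]]} for l in left if l[key] in right_index]


def run_conductor(sap_rows, salesforce_rows, warehouse):
    sap = validate(sap_rows, ["customer_id", "order_total"])
    sfdc = validate(salesforce_rows, ["customer_id", "account_owner"])

    # Package the federated result as a queryable data set...
    federated = federate(sap, sfdc, key="customer_id")

    # ...and, while you're at it, push a physical copy to the warehouse.
    warehouse.extend(federated)
    return federated


if __name__ == "__main__":
    warehouse = []  # stands in for a warehouse load
    sap_rows = [{"customer_id": 1, "order_total": 120.0}]
    salesforce_rows = [{"customer_id": 1, "account_owner": "Jane"}]
    print(run_conductor(sap_rows, salesforce_rows, warehouse))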




Just forget the Legacy Transformation Engines
The old-fashioned “Rube Goldberg” process found in the traditional ETL products:
  • Extract a data set from one source and put it in a data store.
  • Write custom code to clean and align the data and post it to a database.
  • Repeat with each source…
  • Invoke many separate specialized utilities, mostly limited to format conversion.

You can see that this legacy approach certainly cannot adapt to Data Virtualization, which must reach live directly into the sources and federate them en route.

What’s different about Enterprise Enabler’s Transformation Engine?
First, a couple of relevant aspects of the Enterprise Enabler platform. EE is 100% metadata-driven, and you never need to leave the Integrated Development Environment. It is fully extensible to incorporate business rules, cleansing rules, formulas, and processes, which also means that every object is reusable and you can make modifications in a matter of minutes, or even seconds. EE’s single platform handles Data Virtualization, Agile ETL, EAI, ESB, and any hybrid or complex integration pattern. Data workflow orchestration and a composite application designer round out the platform. This framework means that there is a global awareness during execution that enables very complex logic and processing based on the state of any aspect of the system.
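To make “100% metadata-driven” concrete, here is a toy illustration of the idea (not EE’s actual metadata format): the mappings, cleansing rules, and formulas live in a metadata structure, so changing behavior is a metadata edit rather than a code change.

# Toy illustration of metadata-driven transformation: the mapping, cleansing
# rule, and formula are data, not code, so a change is a metadata edit.
# Field names and rules here are invented for the example.
TEMPLATE_METADATA = {
    "mappings": {
        "cust_name": "CustomerName",
        "amt": "OrderAmount",
    },
    "cleansing": {
        "CustomerName": lambda v: v.strip().title(),
    },
    "formulas": {
        "OrderAmountWithTax": lambda row: round(row["OrderAmount"] * 1.0825, 2),
    },
}


def transform(source_row, metadata):
    # Apply mappings, then cleansing rules, then derived-field formulas.
    row = {target: source_row[src] for src, target in metadata["mappings"].items()}
    for field_name, rule in metadata["cleansing"].items():
        row[field_name] = rule(row[field_name])
    for field_name, formula in metadata["formulas"].items():
        row[field_name] = formula(row)
    return row


print(transform({"cust_name": "  acme corp ", "amt": 100.0}, TEMPLATE_METADATA))

Because every rule is declared rather than hard-coded, the same transform function is reusable across templates, which is the spirit of the reusability claim above.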


Some of the capabilities of Enterprise Enabler Transformation Engine:



The Bottom Line
EE’s Transformation Engine streamlines and ensures end-to-end continuity in configuring and processing all data integration patterns, including Agile ETL and virtual data models, providing:
  • Shorter time to value
  • Improved data quality
  • Rapid configuration
  • Reusability that eliminates hand coding

It truly is the heart and brain of Enterprise Enabler. To learn more make sure to check out our Transformation Engine whitepaper (here).

Wednesday, August 9, 2017

Time to Replace and Rip? Yes!

Until recently, the concept of Rip and Replace always carried a terrific fear component. My mother would rather have heard a litany of curses than hear “Rip and Replace” anywhere in sight of her data center. But, alas! Times change.

Informatica, IBM, Tibco, and others have gone the way of punched cards, COBOL, and Fortran. Data Warehouses have served well for a couple of decades now, but the overhead and slowness continue to build up tech debt as tech teams fail to keep up with the requisite pace of business. Some businesses will keep trying to cajole their ancient software into mimicking today’s technologies as they plod forward trying to remain competitive. They won’t succeed, though. You just can’t squeeze agility out of a pipe wrench.


I’m convinced that if my mother had met Data Virtualization, for instance, before she walked out the door, she would have been the first to jump in. She always embraced new ideas, but she also exercised a pragmatic skepticism.

Well Ma, it’s time. All the smart companies are doing it. Not Rip and replace, really. It’s more like Replace and Rip.

What I’m talking about is an orderly modernization path that surprisingly quickly replaces those ancient approaches to data integration that businesses put so much effort into, not to mention money. Huge teams are still spending years integrating across multiple systems, and the cost of every small modification could feed an army. It’s time to get serious about this relatively new Data Virtualization (DV) paradigm. If you don’t know about DV, you’d better wake up and check it out. And while you’re at it, take a look at Agile ETL™. The two together will take you quickly from what Gartner calls your “Mode 1” clunky IT infrastructure to “Mode 2,” embracing mobile, IoT, cloud/hybrid, and all manner of digital.


Here’s a quick overview of Data Virtualization, sometimes referred to as a “Logical Data Warehouse”: Instead of gathering data physically into a staging database or warehouse, a virtual data model is defined, and all of the participating data sources are logically aligned with transformations, validations, and business rules. The virtual models are packaged as ODBC, JDBC, OData, and other services. When the virtual data model is queried, the DV layer reaches out live to the sources, applies all the configured rules, resolves the query, and delivers the data to the calling program. Many companies are getting familiar with DV by leveraging it for their latest wave of Business Intelligence and Analytics.
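From the consumer’s side, the virtual model is just another service endpoint. A minimal sketch of querying one published as OData, with a hypothetical URL, entity set, and credentials:

# Minimal sketch of consuming a virtual data model published as an OData
# service. The URL, entity set, field names, and credentials are hypothetical.
import requests

BASE_URL = "https://dv.example.com/odata/SalesByRegion"

response = requests.get(
    BASE_URL,
    params={"$filter": "Region eq 'EMEA'", "$top": "100"},
    auth=("report_user", "secret"),
    timeout=30,
)
response.raise_for_status()

# The DV layer reaches out to the live sources, applies the configured rules,
# resolves the query, and returns the result set to the caller.
for record in response.json().get("value", []):
    print(record["Region"], record["Revenue"])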

Here is a quick overview of Agile ETL: there is finally a technology to significantly streamline ETL, and that is to leverage the same type of federation used in DV for moving data physically to another application or database. Stone Bond’s Enterprise Enabler® (EE) supports rapid configuration of the federation, validations, and business rules, which are executed live across all the sources, and it delivers the data in the exact form required by the destination, without any staging. Just think about the amount of infrastructure that you can eliminate. Ma would be all over it!
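In code terms, the Agile ETL pattern collapses extract-stage-transform-stage-load into one federated read followed by a direct write. A rough sketch with hypothetical DSNs, view, and table names:

# Rough sketch of the Agile ETL pattern: read the federated, already-transformed
# result from the virtual layer and write it straight into the destination,
# with no staging tables in between. All names are hypothetical placeholders.
import pyodbc

source = pyodbc.connect("DSN=EE_Virtual")
destination = pyodbc.connect("DSN=Finance_DW")

rows = source.cursor().execute(
    "SELECT customer_id, region, order_total FROM vw_orders_federated"
).fetchall()

cursor = destination.cursor()
cursor.fast_executemany = True  # bulk insert support in pyodbc
cursor.executemany(
    "INSERT INTO fact_orders (customer_id, region, order_total) VALUES (?, ?, ?)",
    [tuple(r) for r in rows],
)
destination.commit()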

So, there you are:  Rip out the old and Slip in the new. Or rather, Slip in the new and Rip out the old.


Friday, June 9, 2017

Agile MDM - 1 Myth & 2 Truths

MDM using Data Virtualization

If there’s anything that can benefit from agility, it is most certainly Master Data Management. Untold MDM projects have flat-out failed over the last 15 years. Why? Largely because any dynamic corporation is constantly changing, and with change comes the demand for Master Data to immediately reflect the current reality at hand. That’s not possible with legacy methodologies and tools.

With the advent of Data Virtualization, “Enterprise Master Services” or “KPIs” are always fresh and accurate (with the most recent information). This approach significantly reduces the number of copies of data, thereby reducing the chance of discrepancies across instances of data. Data remains in the original sources of record and is accessed as needed, on demand, for Portals, BI, Reporting, and Integration.

Furthermore, it is not really necessary to define an "everything to everybody" Master definition. Think about it more like an organic approach, growing and changing the models, creating new versions for specific use cases or constituents. The key there is that Enterprise Enabler® (EE) tags every object with notes and keywords as well as the exact lineage, so that a search will find the best fit for the use.

Doesn’t Data Virtualization mean you’re getting a Streaming Data Point?

No, it does not; this is the myth. I often hear the following concern: “If I want to get the KPI, I don’t want just the current live value; I want last month’s value, or even some specific range of days.” The answer is that a Data Master is actually a virtual data model defined as a set of metadata that indicates all of the sources and all of the security, validation, federation, and transformation logic. When the virtual model is queried, Enterprise Enabler® reaches out to the endpoints, federates them, applies all other logic, resolves the query, and returns the latest results. So the data set returned depends on the query. In other words, a Master Data Service/Model resolves the query, retrieves data live from the sources of record, and delivers the latest data available along with the historical data requested.
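To make that concrete: asking for last month is just a parameterized query against the Master model, resolved live at request time. A minimal sketch with hypothetical DSN, view, and column names:

# Minimal sketch: the Data Master is queryable like any view, so "last month"
# is simply a parameterized query resolved live against the sources of record.
# DSN, view, and column names are hypothetical.
from datetime import date, timedelta
import pyodbc

conn = pyodbc.connect("DSN=EE_Virtual")

end = date.today()
start = end - timedelta(days=30)

rows = conn.cursor().execute(
    """
    SELECT kpi_name, kpi_value, as_of_date
    FROM mdm_revenue_kpi
    WHERE as_of_date BETWEEN ? AND ?
    ORDER BY as_of_date
    """,
    start,
    end,
).fetchall()

for kpi_name, kpi_value, as_of_date in rows:
    print(as_of_date, kpi_name, kpi_value)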

In the case where the model consists of real-time streaming data, of course, you are interested in the live values as they are generated. These models still apply business logic, Federation, and such, and you have some way to consume the streaming data, perhaps continuous updates to a dynamic dashboard. However,  that’s not what makes MDM Agile.

The Challenge of Change

The more dynamic your business, the more important agility becomes in Master Data Management. Applications change, new data sources come along, and processes and applications move to cloud versions. Companies are acquired, and critical business decisions are made that impact operations and the shape of business processes. All of these changes could mean updates need to be applied to your Master Data Definitions. The truth is, with legacy MDM methodologies, the definition, programming, and approvals will be measured in months, all the while impeding the progress and alignment of new business processes.

What’s the “Agile” part of Enterprise Enabler's MDM?

Agile MDM is a combination of rapidly configuring metadata-based Data Masters, efficiently documenting them, “sanctioning” them, and making them available to authorized users. Ongoing from there, it is a matter of being able to modify Data Masters in minutes, with versioning, and to move to the corrected or updated service/model. It’s also about storing physical Master data sets only when there is a true need for them.

Ready for the second truth? When you use an Agile Data Virtualization technology such as Stone Bond’s Enterprise Enabler®, along with proper use of its data validation and MDM processes for identifying, configuring, testing, and sanctioning Data Masters, you are applying agile technology and managed agile best practices to ensure a stable, but flexible, MDM operation. Enterprise Enabler offers the full range of MDM activities in a single platform.

The diagram below shows the basic process for Agile MDM that is built into Enterprise Enabler.


Step 1.  A programmer or DBA configures a Data Master as defined by the designated business person.

Step 2. The Data Steward views lineage and authorization, tests, augments notes, and sanctions the model as an official Master Data Definition.

Step 3. The approved Data Master is published to the company MDM portal for general usage.

Thursday, April 6, 2017

5 Reasons you should leverage EE BigDataNOW™ for your Big Data


Big Data has been sweeping the BI and Analytics world for a while now. It’s touted as a better approach to Data Warehousing for Business Intelligence (BI) and Analytics (BA) projects. It has removed hardware limitations on storage and data processing, not to mention broken the barriers of schema and query definitions. All of these advancements have propelled the industry forward.

Literally, you can dump any data in any format into Hadoop and start building analytics on the records. We mean any data, whether it’s a file, a table, an object, or in any schema.



1. EE BigDataNOW™ will organize your Big Data repositories no matter the source

Ok, so everything is good until you realize all your data is sitting in your Hadoop clusters or Data Lakes with no way out; how are you supposed to understand or access your data? Can you even trust the data that is in there? How can you ensure everyone who needs access has a secure way of retrieving the data? How do you know if the data is easy to explore and understand for the average user?
Most importantly, how do you start exposing your Big Data store with APIs that are easy to use and create? These are some of the questions you are faced with when you want to make sense of your Big Data repositories.

Stone Bond’s EE BigDataNOW™ helps you achieve the “last mile” of your Big Data journey. Whether your repositories sit in a Data Lake, in the cloud, or on-premise, EE helps you organize them and make sense of all the data for your end users to access. Users will be able to browse the data with ease and expose it through APIs. EE BigDataNOW™ lets you organize the chaos and madness left behind by whoever loaded the data.

2. Everyone is viewing and referencing the same data

For easy access to the data, Stone Bond provides a Data Virtualization Layer for your Big Data repository that organizes the data into logical models and APIs. It provides a mechanism for administrators to build logical views with secure access to sensitive data. Now everyone is seeing the same data and not different versions of it. This reduces confusion by providing a clear set of Master Data Models and trusted data sets that are sanctioned as having accurate data for their needs. It auto-generates APIs for the models on the fly so users can access the data through SOAP/REST or OData and build dashboards and run analytics on the data. It also provides a clean, queryable SQL interface, so users are not learning new languages or writing many lines of code. It finally brings the sense of calmness and sureness that is needed for true Agile BI development.

3. It’s swift … did we mention you access & federate your data in real-time?

EE BigDataNOW™ can be a valuable component on the ingestion side of the Big Data store too; it will federate, apply transformations, and organize the data to be loaded into the Data Lake using its unique Agile ETL capabilities, making your overall Big Data experience responsive from end to end. EE BigDataNOW™ has a fully UI-driven data workflow engine that loads data into Hadoop whether its source is streaming or stored data. It can federate real-time data with historical data on demand for better analysis.

4. Take the load off your developers

One of the major complexities that Big Data developers run into is building and executing Map-Reduce jobs as part of the data workflow. EE BigDataNOW™ can create and execute Map-Reduce jobs through its Agile ETL Data Workflow Nodes, storing the results in a meaningful, easily accessible way for end users.
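For a sense of what those workflow nodes take off a developer’s plate, here is the kind of boilerplate a hand-built job involves: a generic Hadoop Streaming mapper and reducer in Python, wired up with a hadoop jar command. This is an illustrative sketch, not EE-generated code, and the paths and field layout are invented.

# Generic Hadoop Streaming job: the mapper emits key/value pairs and the
# reducer aggregates them. Illustrative usage (paths are hypothetical):
#   hadoop jar hadoop-streaming.jar \
#     -files mr_job.py -mapper "python mr_job.py map" \
#     -reducer "python mr_job.py reduce" -input /lake/raw -output /lake/counts
import sys


def mapper():
    # Emit "<event_type>\t1" for each record (assumes CSV input with the
    # event type in the first column).
    for line in sys.stdin:
        fields = line.rstrip("\n").split(",")
        if fields and fields[0]:
            print(f"{fields[0]}\t1")


def reducer():
    # Hadoop delivers keys sorted, so counts can be accumulated per key run.
    current_key, total = None, 0
    for line in sys.stdin:
        key, _, value = line.rstrip("\n").partition("\t")
        if key != current_key and current_key is not None:
            print(f"{current_key}\t{total}")
            total = 0
        current_key = key
        total += int(value)
    if current_key is not None:
        print(f"{current_key}\t{total}")


if __name__ == "__main__":
    (mapper if sys.argv[1] == "map" else reducer)()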



5. EE BigDataNOW™ talks to your other non-Hadoop Big Data sources

EE BigDataNOW™ also supports non-Hadoop sources such as Google BigQuery, Amazon Redshift, and SAP HANA. It can connect to these nontraditional Big Data sources and populate or federate data from them for all your Big Data needs.

To read more about Big Data, don’t forget to check out Stone Bond’s Big Data page. What are you waiting for? Break through your Big Data barriers today!


This is a guest blog post written by,