How Healthy Is Your Salesforce Implementation?

No matter how good or thorough your original implementation of Salesforce was, it will from time to time need maintenance, or even periodically an overhaul. The good news is that because it’s a cloud-based solution, Salesforce has your hardware health monitoring and maintenance covered for you. But you are not off the hook for everything:
● Data quality issues, duplicates and other small errors can creep into the best-designed systems.
● Depending on the original architecture, significant increases in data can start to create performance issues.
● Salesforce has three upgrades a year that bring new features and subtle changes to the platform that should be reviewed.
● Your business is constantly evolving, so requirements and business processes will change.
● Incremental additions such as record types, validation rules and the like can quickly get out of hand.

It is not uncommon to want to rethink the overall architecture of your system based on changes in your business. Constant tweaks, field additions, workflow and approval process modifications added through the day-to-day administration of the system mean the tool evolves away from its original implementation specification. Salesforce, like everything else, unfortunately abides by the second law of thermodynamics: everything tends toward disorder.

There is perhaps a temptation to follow the adage “if it ain’t broke, don’t fix it” and leave your Salesforce implementation alone unless it starts to have issues. But this risks ignoring another law that we have all experienced at some point, Finagle’s law of dynamic negatives, also known as Finagle’s corollary to Murphy’s law: anything that can go wrong will go wrong, and always at the worst or most inconvenient moment. If your business depends on Salesforce, then the last thing you want is to be caught by surprise when it starts to yield anomalies or errors that prevent you from running your business.

Just as you should get your car serviced or see your doctor once a year, so too should you have a strategy in place to give your Salesforce implementation a periodic health check. Some of these checks can take the form of well-designed audit reports run at set intervals to look for early warning signs of trouble. But eventually a full system check that reviews the entire implementation is wise, because of:
● The impact of an increasing number of records (code that used to work can create errors later).
● Salesforce governor limits: how close are you to exceeding them?
● Salesforce release features.
● Salesforce critical update implications.
● Your organization's current structure.
● Your organization's current business process requirements.
● Security needs and their ever-changing best practices.

Some organizations may have the skill sets in-house to do such a review, but many would be best served by turning to specialists. It is the combination of skills required that makes this sort of task better suited to external help. No matter how accomplished your internal Salesforce team is, an independent and unbiased health check should be on your Salesforce to-do list. Inaction will most assuredly lead to trouble. Proactive prevention will always be the more cost-effective strategy, so if you need any advice or help, give us a call.

Node.js is for Everyone

When asked about Node.js, most developers will say that they have heard of it and may even have used it in some part of a project. When asked what Node.js actually is, the conversation tends to drift into technology “buzzwords”: Meteor, MongoDB, and sometimes even Express will come up. The point is that while a lot of developers have heard of Node.js, that is about as far as it goes.

Node.js is a newer technology that is rapidly climbing to the top of the technology stacks of a multitude of large companies. Wal-Mart, PayPal, Netflix, LinkedIn, and many others are using it on a daily basis. So what exactly does Node.js do? Simply put, Node.js is a server-side JavaScript runtime. It is single-threaded, asynchronous, and event-driven. Node.js is built on Google’s V8 JavaScript engine (the engine behind Chrome), which compiles JavaScript source code to native machine code instead of interpreting it in real time; this allows JavaScript on the server side to run almost as fast as compiled code. Node.js uses an event loop to process incoming requests: as requests come in, Node.js processes them one at a time and responds through callbacks.

Node.js is capable of running on a wide range of platforms. Windows, Linux, Mac OS, UNIX, and ARM-based systems are just some of the environments Node.js can be, and has been, deployed on. Not only will it run on just about anything, it is open source as well. Pre-built binaries and the source code are available on the Node.js site.

Node.js is a server-side JavaScript runtime, so how do things actually get done with it? Through modules, third-party libraries, and frameworks that extend Node. These packages are managed with npm, the Node.js package manager. There are frameworks for MVC (Model-View-Controller), desktop applications, web servers, and just about anything else you would want.

Node.js is definitely worth looking into as a developer, even if it is just to become more familiar with the terminology and the technologies associated with it, because it is not going away and will only get bigger. More and more companies are finding internal uses for Node.js, and many are using it to build web apps, mobile sites and desktop apps, or simply to host their web APIs. It is lightweight, versatile, and doesn’t require much to run efficiently. It is definitely a technology to keep track of.

Where is Your Salesforce Journey Taking You?

A decade ago, Salesforce was a small, disruptive player in the Customer Relationship Management (CRM) space. What set them apart from the competition was that they offered Software as a Service (SaaS). It was important to Marc Benioff (founder, Chairman and CEO of Salesforce) that the application be “clicks not code”: a solution that did not require infrastructure management and that reduced the technical expertise typically required of companies to configure and set up CRM solutions.

Today Salesforce accounts for over 18% of all global spend on CRM software, versus around 12% for SAP, 9% for Oracle and 6% for Microsoft. With offerings that include a sales cloud, service cloud, marketing cloud, the Force platform and several other services, they’ve moved well beyond the realm of just a CRM tool. It’s now being used for HR solutions, recruiting, project management, agile management, vendor management and much, much more.

It’s hard to believe, then, how many of these new options organizations leave untapped. Many companies are still using Salesforce just to run their sales, marketing or customer service centers, which is a bit like purchasing a membership to a country club. It’s REALLY expensive if you only use the club to play tennis, but it’s a great value if you take advantage of all of the club’s facilities: the golf, the tennis, the pool and the other amenities. Likewise, if you’re paying for Salesforce licenses but only running your sales team in Salesforce, then you’re not receiving your full return on investment (ROI). By carefully leveraging the variety of licenses that Salesforce offers, you can create a high-value, company-wide business management solution that, when developed correctly, integrates easily with other key systems like your ERP or back-end finance system.

If you haven’t done so already, it’s time to review the costs of consuming and maintaining the several different licenses that could potentially be absorbed by Salesforce. Ask questions like: How is your data quality? What does adoption look like? How efficient is your licensing plan? And so on. Take the time to analyze your current situation and consider developing a consolidation and architecture plan that maximizes the potential of Salesforce as well as your other corporate sub-systems. And remember, just because the implementation consultants have left, it doesn’t mean that the project is complete. On the contrary, it needs to be proactively managed, because your Salesforce journey is really just beginning.

SQLite Implementation

We’ve all been in situations where we needed to store data on a device within a specific application. Not only should the database reside on the machine where the application is running, but it also shouldn’t matter whether multiple instances connect to the database. In other words, we only need to provide data to one specific application. For a desktop solution, we could easily set up MSSQL or MySQL; but what if the application in question is running on a mobile device? Similarly, what if we need our database to be portable between devices? SQLite satisfies all of these requirements.

SQLite is a lightweight, self-contained SQL database engine that enables us to put a database on just about any type of device. Also, since SQLite uses a flat file for the actual database, there is no complicated setup to worry about. Typically, a SQLite installation consists of a flat file, the SQLite library (in a format such as a .dll file), and the actual application that will be using the database; that’s it!

Perhaps now you’re thinking, “That’s all good, but how complicated is SQLite to code for?” The answer to that question is, “Not complicated at all!” To demonstrate the simplicity of implementing SQLite, consider the following C# code snippet:

using System.Data;
using System.Data.SQLite;

string connectionString = @"Data Source=C:\exampledb.sqlite3";
SQLiteConnection db = new SQLiteConnection(connectionString);
db.Open();
SQLiteCommand query = db.CreateCommand();
query.CommandText = "SELECT * FROM tblExampleTable";
DataTable dtExampleTable = new DataTable();
SQLiteDataReader dr = query.ExecuteReader(CommandBehavior.CloseConnection);
dtExampleTable.Load(dr);

Now, let’s discuss what this code actually does. After the two using directives, we build a connection string and pass it to a new instance of SQLiteConnection. The connection string provides the location of the flat file that we will be using for the database. In this example, I used the extension .sqlite3 for my flat file. Because of the way the file is parsed, the actual extension that you use is irrelevant; nonetheless, providing a meaningful extension (such as .sqlite3) can give others a good indication of what the file is as well as the version of SQLite being used.

From here, we call the Open() method on our newly created connection object. This method simply tells SQLite to go ahead and connect to our database, whose connection string we passed to the SQLiteConnection constructor. Next, using our connection object, we create a SQLiteCommand that will carry the actual query we want to run. After assigning said query via query.CommandText, we instantiate a standard .NET DataTable. After that, we simply execute the query we just defined and load the results into the DataTable object that we created.

Voila! We now have an object that contains results queried straight from a SQLite flat file. We didn’t have to write any complicated code (like serialization between .NET and SQLite) to get this going. You would of course want to continue beyond our short code example (e.g. displaying the data in a GridView in ASP.NET), but for the purpose of this blog, that’s all we need.
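As a side note, real-world queries usually involve values supplied at run time, and the same pattern extends naturally to parameterized commands. The following is a minimal sketch, assuming the System.Data.SQLite ADO.NET provider (which the classes above come from) and a purely hypothetical tblExampleTable with Name and Age columns; it wraps the connection, command and reader in using blocks so they are disposed automatically, and passes the filter value as a parameter instead of concatenating it into the SQL string.

using System;
using System.Data;
using System.Data.SQLite;

class ParameterizedQueryExample
{
    static void Main()
    {
        // Hypothetical database file and table, used purely for illustration.
        string connectionString = @"Data Source=C:\exampledb.sqlite3";

        using (SQLiteConnection db = new SQLiteConnection(connectionString))
        {
            db.Open();

            using (SQLiteCommand query = db.CreateCommand())
            {
                // The @minAge placeholder is filled in by the provider,
                // so the value never has to be spliced into the SQL text.
                query.CommandText =
                    "SELECT Name, Age FROM tblExampleTable WHERE Age >= @minAge";
                query.Parameters.AddWithValue("@minAge", 21);

                DataTable results = new DataTable();
                using (SQLiteDataReader dr = query.ExecuteReader())
                {
                    results.Load(dr);
                }

                Console.WriteLine("Rows returned: " + results.Rows.Count);
            }
        }
    }
}

The parameter placeholder keeps the query safe from SQL injection and quoting headaches, while the using blocks guarantee the flat file is released even if an exception is thrown.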

N-Tier Software Architecture

As many developers probably realize, software development is a very detailed process. Not only must a requirements-gathering and specification-documentation process be established, but there is also a need for a pre-development architectural phase. What do I mean by this? Well, you can’t exactly instruct builders to build a house without an architect and electrician laying out the blueprints. Likewise, “good software” is usually built on top of pre-planned blueprints.

As an example, consider a game development studio. Before game developers can implement code written for a game environment, there must first be a game engine. The game engine provides the developers with a set of tools (e.g. lighting, sound, and physics) for developing a game. Programmers don’t necessarily care about the actual environment or what was involved in creating the game engine; they only care about the availability of the tools. In this posting, we will explore one of the most integral methodologies in today’s world of software development: multi-tier software architecture.

N-tier architecture (or multi-tier architecture) is an approach to software development in which code and the underlying data structures are split into multiple independent layers. This structure is not only a logical separation of layers, but oftentimes a physical separation as well (i.e. different layers on different machines). The first question that may come to mind when considering this approach might be, “Why design software this way? Wouldn’t such a separation introduce unnecessary complexity?” The answer is quite the contrary. By introducing this separation, we are allowing software to be written in a way that enables better extensibility, better usability, and more portability.

During my years as an undergraduate C.S. student, we were introduced to programming in several steps. The first step (structured programming) was to gain an understanding of basic programming fundamentals such as loops, if-conditions, and methods/functions. The second step was an introduction to one of the most important programming fundamentals to date: object-oriented programming. Beyond this, further instruction was in the way of implementation, including network programming and UNIX systems programming. These approaches to the art of programming (yes, programming is an art) are vital when learning the fundamentals. However, they don’t exactly teach true business-level software development. We not only want to stick to the laws of object-oriented programming and everything we were taught from a beginner’s perspective, we also need to provide, again, extensibility, usability, and portability.

To gain a true understanding of how n-tier architecture can help us achieve this, we first take a look directly at the multi-layer aspect of this development approach. The standard approach is to split development into three main layers: the presentation layer, the logic layer, and the data access layer.

The first layer, the Presentation Tier, is the portion of the software that the user actually sees; more specifically, the UI. This layer (or tier) is the visual representation of the data that is gathered in the other, not-so-visible layers. The presentation is often in the form of a graphical user interface, such as the website that the user actually sees.
Not only does this layer typically provide the visual representation of our data, it also provides a means of user intervention, allowing the user to interact with the software environment. This layer should not perform complex logic (e.g. data processing and calculations), as it is the next layer’s job to implement such tasks.

The second layer in our example is the Logic Tier. This layer is hidden from the front-end user and is in charge of performing calculations and business logic and/or transferring data between the surrounding layers. This layer is sometimes considered the middleware layer (e.g. ASP.NET).

The final layer in our example (keeping in mind that there could be more layers; I’ve chosen three for this example) is the Data Tier. This layer is in charge of moving data between the logic tier and the database. No complex logic or calculations should be performed at this layer. Simple logic, such as determining how to place data in an object for the other layers to use, is usually okay; complex logic is the responsibility of the aforementioned logic tier.

By separating our software into the layers described above, we can save ourselves a lot of development time in the future. Why would we save time? Here’s a scenario that I’ll use to help explain. Let’s say we’re developing complex software that needs to access the database in several different places. Let’s also say that the project architect decides to establish a connection to said database in each portion of the application that needs one. Now, finally, let’s say that the means of connection is specific to the type of database (e.g. SQL Server, MySQL) that we’re connecting to. What is the most obvious problem that we might run into? If you guessed extensibility and/or portability, then you guessed right. Let’s say the application was built to use MySQL, but down the road we need to switch to SQL Server. In the architectural setup described above, we would be tasked with stripping out all of the old connection instances. Depending on the complexity of the application, this could get messy very quickly.

The n-tier setup helps alleviate this problem because we’ve made presentation, logic, and data access completely independent of one another. To provide a better explanation, let’s consider the following as an alternative to the overly cluttered scenario described above. If we instead follow an n-tier setup, switching from MySQL to SQL Server in the data tier is simply a matter of swapping the previously used data layer for our SQL Server layer. Ideally, we wouldn’t have to change a single line of code in the other two layers, as they do not care how data is retrieved in the lowest layer. The logic tier simply makes calls to the DAL, and as long as the method signatures have not changed, then as far as the logic tier is concerned, nothing has changed. This swapping out of layers can be done at any level: if the presentation layer needed to change, as before, the other two layers would not care (see the sketch at the end of this post).

Now that we finally have an idea of what multi-tier is, one of the most important things to consider about this way of thinking is the portability that it provides. A major advantage of that portability is the ability to have not only a logical separation of the different layers but a physical separation as well; each independent layer can exist on a different machine.
The physical isolation is where we draw the major line between n-tier and what is known as MVC (model-view-controller, which we will discuss in a later blog). Now that you have discovered n-tier, an entirely new world of software architecture has opened up in front of you (well, hopefully :-)).
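To make the MySQL-to-SQL Server scenario above a little more concrete, here is a minimal C# sketch of the idea. Every name in it (the ICustomerRepository interface, the two repository classes and the CustomerService logic-tier class) is hypothetical and heavily simplified; the point is simply that the logic tier depends on an interface, so swapping the data-tier implementation never touches the other layers.

using System;
using System.Collections.Generic;

// Data tier contract: the logic tier only ever sees this interface.
public interface ICustomerRepository
{
    IList<string> GetCustomerNames();
}

// One possible data tier implementation (MySQL-specific code would live here).
public class MySqlCustomerRepository : ICustomerRepository
{
    public IList<string> GetCustomerNames()
    {
        // Stubbed out for illustration; a real implementation would query MySQL.
        return new List<string> { "Customer from MySQL" };
    }
}

// A drop-in replacement (SQL Server-specific code would live here instead).
public class SqlServerCustomerRepository : ICustomerRepository
{
    public IList<string> GetCustomerNames()
    {
        // Stubbed out for illustration; a real implementation would query SQL Server.
        return new List<string> { "Customer from SQL Server" };
    }
}

// Logic tier: it neither knows nor cares which database sits underneath.
public class CustomerService
{
    private readonly ICustomerRepository _repository;

    public CustomerService(ICustomerRepository repository)
    {
        _repository = repository;
    }

    public int CountCustomers()
    {
        return _repository.GetCustomerNames().Count;
    }
}

public static class Program
{
    public static void Main()
    {
        // Switching databases means changing only this one line.
        CustomerService service = new CustomerService(new MySqlCustomerRepository());
        Console.WriteLine("Customers: " + service.CountCustomers());
    }
}

In a real application the presentation tier (a website or desktop UI) would call CustomerService in exactly the same way, which is why a change at any one level leaves the other two untouched.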

Extraction, Transformation and Loading

Depending on the source systems and the type of data basis, the process of loading data into SAP BW is technically supported in different ways. In the conception phase, the different data sources first need to be identified so that the data can afterwards be transformed with the suitable tool.

Data Basis

In addition to the original mySAP.com components, which provide data via extractors, heterogeneous data can be loaded from:
● Flat files: A flat file in ASCII or CSV format can be read automatically by the SAP BW standard.
● Data providers: Providers such as Dun & Bradstreet and AC Nielsen US deliver data that already has an import-friendly format.
● XML: XML data can also be processed in SAP BW.

Data Staging Tools

● DB Connect: Allows relational databases to be accessed directly. Here, SAP DB MultiConnect is used to create a connection to the database management system (DBMS) of the external database. By importing metadata and original data, the necessary structures can be generated in SAP BW, and the data can be loaded without problem.
● ETL tools (for example, DataStage): In heterogeneous system landscapes, an important requirement is that the different data structures and contents are consolidated before being loaded into SAP BW. You can use an ETL tool such as Ascential DataStage to load data from heterogeneous systems such as Siebel and PeopleSoft, transform this data into a single format, and then load it into SAP BW via a Business Application Programming Interface (BAPI).
● UD Connect: Using UD Connect, you can access just about all relational and multi-dimensional data sources. UD Connect transfers the data as flat data; multi-dimensional data is converted to a flat format when UD Connect is used.

Interfaces

● BW Service Application Programming Interface (SAPI): An SAP-internal component, delivered as of Basis release 3.1i, through which communication between mySAP Business Suite components and SAP BW takes place.
● BAPI: Like the SAPI, a BAPI is used for structured communication between SAP BW and external systems. Both data providers and ETL tools use this interface.
● FILE: SAP supports the automatic import of flat files in CSV or ASCII format as standard.
● Simple Object Access Protocol (SOAP): The SOAP RFC service is used to read XML data and store it in a delta queue in SAP BW. The data can then be processed further with a corresponding DataSource and the SAPI.
● UD Connect (Universal Data Connect): To connect to data sources, UD Connect can use the JCA-capable (J2EE Connector Architecture) BI Java Connectors, which are available as resource adapters for various drivers, protocols and providers.
