Don’t Let These Four Fears Keep You From the Cloud
“Never let the fear of striking out keep you from playing the game.” Babe Ruth’s famous quote reminds me of the advice I often give to clients who are considering a move to the Cloud. Some organizations let their fear of change guide future plans, despite the Cloud’s many benefits, which include cost savings, higher velocity, and more opportunity for innovation. That’s why I’ve assembled the four most common concerns, and why they shouldn’t stop you from moving to the Cloud.

1. Security
There are numerous solutions to address this concern, but first and foremost it’s important to note that the public Cloud has had many years to mature and become battle tested; companies like Amazon and Google have spent a lot of money tackling security issues head-on. In fact, it’s often more secure than your private server, which tends to be more vulnerable due to things like missed software updates, network compromise, or social exploits. However, if you’re still wary of keeping your company’s sensitive data on the Cloud, you can keep key data on a private server and move the less sensitive data to a serverless Cloud.

2. Legacy application issues
Clients often believe they have an app or process that can’t move to the Cloud, whether because it’s built on special architecture or tied to a specific piece of hardware in their data center. And while it may be true that your business is tied to a legacy application now, switching to the Cloud can provide you with a level of independence you couldn’t dream of with private servers.

3. Organizational barriers
Having a diverse team with different viewpoints and areas of expertise is a wonderful thing, but when it comes to being ready to switch to the Cloud, others on your team may not be as ready as you are. Consider this the perfect opportunity to be your project’s evangelist. In a serverless Cloud environment, you can try all the new projects you’ve dreamed up for a fraction of what they would cost on dedicated infrastructure.
I’ve found that the difference between a growing business and a stagnant one is whether you’re willing to embrace change.

4. Data handling concerns
Sometimes companies are bound by data use agreements or regulatory concerns that make hosting their data on a public or hybrid cloud much more difficult. Maybe that data is jointly owned with business partners, or was collected under a restrictive data use agreement. Often, the data itself has been aggregated over years or decades from all sorts of sources, and you’re just not sure what you have to do to untangle all these concerns. There’s almost always an answer involving the right security controls or a private cloud integration. Additionally, if you don’t have that expertise in-house, you can find well-informed attorneys to help you understand your responsibilities.

I understand that moving to the Cloud can be an intimidating process, especially if you or someone on your team has any of these concerns. However, the rewards that accompany a shift to the Cloud can help your organization save money, move your business forward, and make life a lot easier for your developers. Listen to Babe Ruth: don’t let fear stop you from reaping the benefits of moving to the Cloud.

About the Author: Originally a medical researcher, Hoke Currie has been leading data analytics consulting and crossover software development teams since 1994 for clients including the US Army, CMS, Sodexo, DuPont, the World Bank, UPS, the Bill & Melinda Gates Foundation, and the International Monetary Fund. Formerly a managing partner at GraySail for more than sixteen years, he is currently a Principal Consultant at Rural Sourcing, and the Java practice lead in the Augusta Development Center. His specialties include Java, Agile development, Cloud adoption and migration, and large-scale data analytics / machine learning.
Five Ways For an Easier Transition to AWS
While living in the Cloud has become the new normal for many companies, there are still plenty out there that have yet to make the move. It’s a huge effort for your development team, with a lot of room for error. That’s why I’ve assembled these five crucial steps to help you work through a successful AWS implementation. Even if you’re not involved in setting up the architecture, this guide will help you gain a much deeper understanding of how AWS functions.

Create a plan, but be willing to pivot
Fail to plan and you plan to fail, right? No successful AWS implementation happens without a solid plan in place, but remember that things do (and will) come up: third-party dependency problems, requirements changes, even service outages. When you’re getting started, some important questions to ask include “What are the application’s goals?”, “What kind of traffic will it have?”, “How is the app going to be built?”, and “Where will your team be located?”. Use Infrastructure as Code to avoid the human error that comes with doing things manually. And anticipate failure: have a procedure in place to respond, be it disaster recovery or remediation documentation.

Make security a top priority
These days, large corporate security breaches are pretty much guaranteed to make the news. So, keep your company happy (and off of CNN) by making sure your AWS implementation is secure. To start, use AWS Identity and Access Management (IAM) to define identities and access policies. If you don’t have these in place, it’s hard to get everyone on your team to do their part. And when everybody isn’t doing their part, gaps can get overlooked. You’ll also want to prepare for security events ahead of time. For example, if an unknown user or strange traffic pattern arises, do you want an automated or human response? Or both? The sooner you get eyes on a security problem, the sooner you can solve it and potentially lessen its severity.

Manage costs... 
ahead of time
Keeping your costs in check is an important part of the implementation process. Over-provisioning of resources is all too common, as are unexpected costs from transferring data into and out of the Cloud. If your need for resources fluctuates throughout the year, a consumption model can help. By adopting this type of operating model, you use auto-scaling and only pay for the resources that are required. Just be careful, though. With the benefits of auto-scaling come a few drawbacks: if only the application scales up, the infrastructure that supports it can become a bottleneck and cause delays or timeouts; and if the database isn’t scaling at the same rate as the web application, you can see connection issues while read/write operations struggle to keep up.

Ensure reliability
Sure, the AWS SLA is reliable, but what can happen in the procedures that take place between you and AWS? Network problems, power loss… you’ll want to test your recovery procedures in advance so you can automatically recover from failures in the future. Another way to ensure reliability is to stop guessing capacity. I’ve heard time and time again from clients that “it’ll never be more than x amount,” and then, sure enough, it ends up over the estimate. Now you’ve been impacted, whether that’s extra time spent or dollars lost. If you’re on a consumption model with auto-scaling, capacity simply tracks demand.

Optimize overall performance
Rather than having your team learn how to host and run a new technology, that technology can be consumed as a service. For example, NoSQL databases, media transcoding, and machine learning require expertise that isn’t always available on every team. AWS has services that can be consumed while the team continues to focus on product development and business value, instead of trying to master something new. 
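To see why a consumption model helps when demand fluctuates, here is a back-of-the-envelope comparison sketch. The hourly rate and the demand profile are hypothetical numbers chosen for illustration; real AWS pricing varies by service and region.

```python
# Compare provisioning for peak demand (and paying around the clock)
# against a pay-per-use consumption model. All figures are made up.

def fixed_cost(peak_units: int, rate_per_unit_hour: float, hours: int) -> float:
    """Provision for the peak and pay for it every hour."""
    return peak_units * rate_per_unit_hour * hours

def consumption_cost(demand_by_hour: list, rate_per_unit_hour: float) -> float:
    """Auto-scale: pay only for the units actually used each hour."""
    return sum(units * rate_per_unit_hour for units in demand_by_hour)

# Hypothetical day: quiet overnight, a sharp business-hours peak.
demand = [2] * 8 + [10] * 8 + [4] * 8   # units needed per hour, 24 hours
rate = 0.10                              # dollars per unit-hour (invented)

fixed = fixed_cost(peak_units=max(demand), rate_per_unit_hour=rate, hours=len(demand))
metered = consumption_cost(demand, rate)
print(f"fixed: ${fixed:.2f}, consumption: ${metered:.2f}")
```

With this profile the peak-provisioned cost is roughly double the metered cost, which is exactly the over-provisioning gap the consumption model closes.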
Another way to optimize site performance is by using serverless architectures: they’re easy to deploy, low-cost, scalable, and allow for continuous improvement. Finally, be sure to use some “mechanical sympathy.” That means understand and deploy the technology that best aligns with what you’re trying to achieve. For example, consider data access patterns when selecting database or storage approaches. You don’t want to choose technologies that work against each other in your stack. AWS implementation can be a big undertaking, but with the right team in place and a well-thought-out strategy, transitioning to the Cloud is easier than you think. Learn more about how Rural Sourcing has helped clients implement AWS for their organizations by visiting our Results page.
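To make the serverless idea concrete, here is a minimal Lambda-style handler sketch. The (event, context) shape matches the usual serverless convention, but the payload fields and the thumbnail logic are invented for illustration, not taken from any real service contract.

```python
# A minimal serverless-style handler. It runs (and bills) only per
# request, and because it is a plain function it is trivial to unit
# test locally before deploying. The event fields are hypothetical.

def handler(event: dict, context: object = None) -> dict:
    """Compute the pixel count for a hypothetical thumbnail request."""
    width = int(event.get("width", 128))
    height = int(event.get("height", 128))
    pixels = width * height
    return {"statusCode": 200, "body": {"thumbnail_pixels": pixels}}

# Invoke locally, exactly as a unit test would.
response = handler({"width": 64, "height": 64})
print(response["body"]["thumbnail_pixels"])
```

Keeping the handler a pure function like this is one reason serverless architectures support the continuous improvement mentioned above: each deploy is a small, independently testable unit.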
How Healthy Is Your Salesforce Implementation?
No matter how good or thorough your original implementation of Salesforce was, it will need maintenance from time to time, or even periodically an overhaul. The good news is that because it’s a cloud-based solution, Salesforce has hardware health monitoring and maintenance covered for you. But you are not off the hook for everything:
● Data quality issues, duplicates, and other small errors can creep into the best-designed systems.
● Depending on the original architecture, significant increases in data can start to create performance issues.
● Salesforce has three upgrades a year that bring new features and subtle changes to the platform that should be reviewed.
● Your business is constantly evolving, and so requirements and business processes will change.
● Incremental additions such as record types, validation rules, etc. can quickly get out of hand.
It is not uncommon to want to rethink the overall architecture of your system based on changes in your business. Constant tweaks, field additions, and workflow and approval process modifications added by the day-to-day administration of the system mean the tool evolves away from its original implementation specification. Salesforce, like everything else, unfortunately abides by the 2nd law of thermodynamics: everything tends toward disorder. There is perhaps a temptation to follow the adage “if it ain’t broke, don’t fix it” and leave your Salesforce implementation alone unless it starts to have issues. But this would be at the risk of ignoring another law that we have all experienced at some point, Finagle’s law of dynamic negatives, also known as Finagle’s corollary to Murphy’s law: anything that can go wrong will go wrong, and always at the worst or most inconvenient moment. If your business depends on Salesforce, then the last thing you want is to be caught by surprise when it starts to yield anomalies or errors that prevent you from running your business. 
Just as you should get your car serviced or see your doctor once a year, so too should you have a strategy in place to give your Salesforce implementation a periodic health check. Some of these checks can be in the form of well-designed audit reports that can be run at set intervals to look for early warning signs of trouble. But eventually a full system check that reviews the entire implementation is wise because of:
● The impact of an increasing number of records (code that used to work can create errors later).
● Salesforce governor limits: how close are you to exceeding them?
● Salesforce release features.
● Salesforce critical update implications.
● Your organization's current structure.
● Your organization's current business process requirements.
● Security needs and their ever-changing best practices.
Some organizations may have the skill sets in-house to do such a review, but many would be best served by turning to specialists to complete such a task. It is the combination of skills required that makes this sort of task better suited to external help. No matter how accomplished your internal Salesforce team is, an independent and unbiased health check-up should be on your Salesforce to-do list. Inaction will most assuredly lead to trouble. Proactive prevention will always be the more cost-effective strategy, so if you need any advice or help, give us a call.
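Audit reports like these can start very simply. The sketch below checks exported org metrics against warning thresholds; the metric names, usage numbers, and ceilings are all hypothetical stand-ins, not values from the Salesforce API or official governor limits.

```python
# A toy periodic health-check: flag any metric at or above 80% of its
# ceiling. The metrics dict stands in for numbers you would export
# from your org; every figure here is invented for illustration.

WARN_AT = 0.80  # flag anything at 80% or more of its ceiling

def health_warnings(metrics: dict) -> list:
    """Return a sorted warning line for every metric nearing its ceiling."""
    warnings = []
    for name, (used, ceiling) in metrics.items():
        usage = used / ceiling
        if usage >= WARN_AT:
            warnings.append(f"{name}: {usage:.0%} of limit")
    return sorted(warnings)

metrics = {
    "api_calls_24h": (94_000, 100_000),      # hypothetical daily quota
    "data_storage_mb": (7_500, 10_240),
    "custom_fields_on_account": (720, 800),
}
for line in health_warnings(metrics):
    print(line)
```

Run on a schedule, even a crude report like this turns Finagle’s-law surprises into early warnings.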
Where is Your Salesforce Journey Taking You?
A decade ago, Salesforce was a small, disruptive player in the Customer Relationship Management (CRM) space. What set them apart from their competition was that they were Software as a Service, or SaaS. It was important to Marc Benioff (founder, chairman, and CEO of Salesforce) that this application be “clicks not code”: a solution that did not require infrastructure management and reduced the technical expertise typically required of companies to configure and set up CRM solutions. Today Salesforce accounts for over 18% of all global spend on CRM software, versus around 12% for SAP, 9% for Oracle, and 6% for Microsoft. With offerings that include a sales cloud, service cloud, marketing cloud, the Force platform, and several other services, they’ve moved well beyond the realm of just a CRM tool. It’s now being used for HR solutions, recruiting, project management, agile management, vendor management, and much, much more. Despite this, organizations leave a lot of these new options untapped. Many companies are still using Salesforce just to run their sales, marketing, or customer service centers, which is kind of like purchasing a membership to a country club. It’s REALLY expensive if you only use the club to play tennis, but it’s a great value if you use all of the club’s facilities, such as golf, tennis, the pool, and its other amenities. Likewise, if you’re paying for Salesforce licenses but only running your sales team in Salesforce, then you’re not receiving your full return on investment (ROI). By carefully leveraging the variety of licenses that Salesforce offers, you can create a high-value, company-wide business management solution that, when developed correctly, easily integrates with other key systems like your ERP or back-end finance system. If you haven’t done so already, it’s time to review the costs of consuming and maintaining several different systems whose functions could potentially be absorbed by Salesforce. 
Ask questions like: How is your data quality? What does adoption look like? How efficient is your licensing plan? Take the time to analyze your current situation and consider developing a consolidation and architecture plan that maximizes the potential of Salesforce as well as your other corporate sub-systems. And remember, just because the implementation consultants have left, it doesn’t mean that the project is complete. On the contrary, it needs to be proactively managed, because your Salesforce journey is really just beginning.
We’ve all been in situations where we needed to store data on a device within a specific application. Not only should the database reside on the machine where the application is running, but it also shouldn’t matter if we have multiple instances connecting to the database. In other words, we only need to provide data to one specific application. For a desktop solution, we could easily set up MSSQL or MySQL; but what if the aforementioned application is running on a mobile device? Similarly, what if we need our database to be portable between devices? SQLite solves all of these requirements. SQLite is a light-weight and self-contained SQL database engine that enables us to put a database on just about any type of device. Also, since SQLite uses a flat file for the actual database, there is no worry of a complicated setup. Typically, a SQLite installation consists of a flat file, the SQLite library (in a format such as a .dll file), and the actual application that will be using the database. That’s it! Perhaps now you’re thinking, “That’s all good, but how complicated is SQLite to code for?” The answer to that question is, “Not complicated at all!” To demonstrate the simplicity of implementing SQLite, consider the following C# code snippet:

string connectionString = @"Data Source=C:\exampledb.sqlite3";
SQLiteConnection db = new SQLiteConnection(connectionString);
db.Open();

SQLiteCommand query = db.CreateCommand();
query.CommandText = "SELECT * FROM tblExampleTable";

DataTable dtExampleTable = new DataTable();
SQLiteDataReader dr = query.ExecuteReader(CommandBehavior.CloseConnection);
dtExampleTable.Load(dr);

Now, let’s discuss what this code actually does. In the first two lines, we’re building a connection string and instantiating a new SQLiteConnection. The argument passed into the constructor provides the location of the flat file that we will be using for the database. In the case of this example, I used the extension .sqlite3 for my flat file. 
Because of the way the file is parsed, the actual extension that you use is irrelevant. Nonetheless, providing a meaningful extension (such as .sqlite3) can give others a good indication of what the file is, as well as the version of SQLite being used. From here, we call the Open() method on our newly created connection object. This method simply tells SQLite to go ahead and connect to our database, whose connection string was passed to the SQLiteConnection constructor. Next, using our connection object, we create a SQLiteCommand that will provide the engine with the actual query we will be running. After assigning said query via query.CommandText, we instantiate a standard .NET DataTable. After that, we simply execute the query we just defined and load the results into the DataTable object that we created. Voila! We now have an object that contains results queried straight from a SQLite flat file. We didn’t have to write any complicated code (like serialization between .NET and SQLite) to get this going. You would of course want to continue beyond our short code example (e.g., displaying the data in a GridView in ASP.NET), but for the purpose of this blog, that’s all we need.
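For comparison, the same connect-query-load flow in Python’s built-in sqlite3 module is just as short. The table name and data below are hypothetical, and ":memory:" is used so the sketch runs without touching disk; point it at a file path (e.g. "exampledb.sqlite3") for a persistent database.

```python
import sqlite3

# Same flow as the C# snippet: open a connection, run a SELECT, and
# collect the rows. The schema and rows are invented for illustration.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE tblExampleTable (id INTEGER PRIMARY KEY, name TEXT)")
db.executemany("INSERT INTO tblExampleTable (name) VALUES (?)",
               [("alpha",), ("beta",)])

rows = db.execute("SELECT id, name FROM tblExampleTable ORDER BY id").fetchall()
db.close()
print(rows)  # → [(1, 'alpha'), (2, 'beta')]
```

The portability point from earlier holds here too: the same .sqlite3 flat file can be opened by the C# application, this Python script, or a mobile app, because the file format, not the host language, defines the database.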
N-Tier Software Architecture
As many developers probably realize, software development is a very detailed process. Not only must a requirements-gathering and specification documentation process be established, but there is also a need for a pre-development architectural phase. What do I mean by this? Well, you can’t exactly instruct builders to build a house without an architect and electrician laying out the blueprints. Likewise, “good software” is usually built on top of pre-planned “blueprints”. As an example, consider a game development studio. Before game developers can implement code written for a game environment, there must first be a game engine. The game engine provides the developers a set of tools (e.g., lighting, sound, and physics) for developing a game. Programmers don’t necessarily care about the actual environment or what was involved in creating the game engine; they only care about the availability of the tools. In this posting, we will explore one of the most integral methodologies in today’s world of software development: multi-tier software architecture.

N-Tier Architecture
N-tier architecture (or multi-tier architecture) is an approach to software development in which code and underlying data structures are split into multiple independent layers. This separation is not only logical but oftentimes physical as well (i.e., different layers on different machines). The first question that may come to mind when considering this approach might be, “Why design software this way? Wouldn’t such a separation introduce unnecessary complexity?” Quite the contrary: by introducing this separation, we are allowing software to be written in a way that enables better extensibility, better usability, and more portability. During my years as an undergraduate C.S. student, we were introduced to programming in several steps. 
The first step (structured programming) was to gain an understanding of basic programming fundamentals such as loops, if-conditions, and methods/functions. The second step was an introduction to one of the most important programming fundamentals to date: object-oriented programming. Beyond this, further instruction came in the way of implementation, including network programming and UNIX systems programming. These approaches to the art of programming (yes, programming is an art) are vital when learning the fundamentals. However, they don’t exactly teach true business-level software development. We not only want to stick to the laws of object-oriented programming and all of those things we were taught from a beginner’s perspective, but we also need to provide, again, extensibility, usability, and portability. To gain a true understanding of how n-tier architecture can help us achieve this, we first look directly at the multi-layer aspect of this development approach. The standard approach is to split development into three main layers: the presentation layer, the logic layer, and the data access layer.

The first layer, the Presentation Tier, is the portion of the software that the user actually sees; more specifically, the UI. This layer (or tier) is the visual representation of the data that is gathered in the other, not-so-visible layers. The presentation is often in the form of a graphical user interface, such as the website that the user actually sees. Not only does this layer typically provide the visual representation of our data, it also provides a means of user intervention, thus allowing the user to interact with the software environment. This layer should not perform complex logic (e.g., data processing and calculations), as it is the next layer’s job to implement such tasks. The second layer in our example is the Logic Tier. 
This layer is hidden from the front-end user and is in charge of performing calculations, applying business logic, and transferring data between the surrounding layers. This layer is sometimes considered the middleware layer (e.g., ASP.NET). The final layer in our example (keeping in mind that there could be more layers; I’ve chosen three for this example) is the Data Tier. This layer is in charge of moving data between the logic tier and the database. No complex logic or calculations should be performed at this layer. Simple logic, such as determining how to place data in an object for the other layers to utilize, is usually okay; complex logic is the responsibility of the aforementioned logic tier. By separating our software into the various layers described above, we can save ourselves a lot of development time in the future. Why would we save time? Here’s a scenario that I’ll use to help explain. Let’s say we’re developing complex software that needs to access the database in several different places. Let’s also say that the project architect decides to establish a connection to said database in each portion of the application that needs one. Now, finally, let’s say that the means of connection is specific to the type of database (e.g., SQL Server, MySQL) that we’re connecting to. What is the most obvious problem that we might run into? If you guessed extensibility and/or portability, then you guessed right. Let’s say the application was built to use MySQL, but down the road we need to switch to SQL Server. In the architectural setup described above, we would be tasked with stripping out all of the old connection instances. Depending on the complexity of the application, this could get messy really quickly. The n-tier setup helps alleviate this problem, as we’ve made presentation, logic, and data access completely independent of one another. 
To provide a better explanation, let’s consider the following as an alternative to the overly cluttered scenario described above. If we instead follow an n-tier setup, switching from MySQL to SQL Server in the data tier would be a simple matter of swapping out the previously used data layer for our SQL Server layer. Ideally, we wouldn’t have to change a single line of code in the other two layers, as they do not care how data is retrieved in the lowest layer. The logic tier simply makes calls to the DAL, and as long as the method signatures have not changed, nothing has changed as far as the logic tier is concerned. This swapping out of layers can be done at any level. So, if the presentation layer needed to change, as before, the other two layers would not care. Now that we have an idea of what multi-tier is, one of the most important things to consider about this way of thinking is the portability that it provides. A major advantage of this portability is the ability to have not only a logical separation of the different layers but a physical separation as well; each independent layer can exist on a different machine. This physical isolation is where we draw the major line between n-tier and what is known as MVC (model-view-controller; we will discuss this in a later blog). Now that you have discovered n-tier, an entirely new world of software architecture has been opened in front of you (well, hopefully :-))
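The database swap described above can be sketched in a few lines. This is a minimal, hypothetical three-tier arrangement in Python: the class and method names are invented for illustration, and the “databases” are stubbed in memory so the sketch runs anywhere.

```python
from abc import ABC, abstractmethod

# Data tier: one interface, multiple interchangeable implementations.
class DataAccessLayer(ABC):
    @abstractmethod
    def get_customers(self) -> list: ...

class MySqlDal(DataAccessLayer):
    def get_customers(self) -> list:
        return ["Ada", "Grace"]          # stub for a MySQL query

class SqlServerDal(DataAccessLayer):
    def get_customers(self) -> list:
        return ["Ada", "Grace"]          # stub for a SQL Server query

# Logic tier: depends only on the interface, never on a concrete DAL.
class CustomerService:
    def __init__(self, dal: DataAccessLayer):
        self.dal = dal

    def customer_count(self) -> int:
        return len(self.dal.get_customers())

# Presentation tier: renders whatever the logic tier hands back.
def render(service: CustomerService) -> str:
    return f"Customers: {service.customer_count()}"

# Swapping MySQL for SQL Server touches one line; the other tiers are unchanged.
print(render(CustomerService(MySqlDal())))
print(render(CustomerService(SqlServerDal())))
```

Because the logic tier holds only a DataAccessLayer reference, the method-signature contract is the whole coupling between tiers, which is exactly why the swap costs one line instead of a hunt through the codebase.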
Extraction, Transformation and Loading
Depending on the source systems and the type of data basis, the process of loading data into SAP BW is technically supported in different ways. In the conception phase, you first need to identify the different data sources in order to be able to transform the data with the suitable tool afterwards.

Data Basis
Additional heterogeneous data can be loaded alongside the original mySAP.com components that provide data via extractors:
Flat files: A flat file in ASCII or CSV format can automatically be read by the SAP BW standard.
Data providers: Providers such as Dun & Bradstreet and AC Nielsen US provide data which already has an import-friendly format.
XML: XML data can also be processed in SAP BW.

Data Staging Tools
DB Connect: Allows relational databases to be accessed directly. Here, SAP DB MultiConnect is used to create a connection to the database management system (DBMS) of the external database. By importing metadata and original data, the necessary structures can be generated in SAP BW, and the data can be loaded without problems.
ETL tools (for example, DataStage): In heterogeneous system landscapes, an important requirement is that the different data structures and content are consolidated before being loaded into SAP BW. You can use an ETL tool such as Ascential DataStage to load data from heterogeneous systems such as Siebel and PeopleSoft, transform this data into a single format, and then load it via a Business Programming Interface into SAP BW.
UD Connect: Using UD Connect, you can access just about all relational and multi-dimensional data sources. UD Connect transfers the data as flat data; multi-dimensional data is converted to a flat format when UD Connect is used.

Interfaces
BW Service Application Programming Interface (SAPI): The SAPI is an SAP-internal component, delivered as of Basis release 3.1i, through which communication between mySAP Business Suite components and SAP BW takes place. 
BAPI: Like the SAPI, a BAPI is used for structured communication between SAP BW and external systems. Both data providers and ETL tools use this interface.
FILE: SAP supports the import of flat files in CSV or ASCII format as standard.
Simple Object Access Protocol (SOAP): The SOAP RFC service is used to read XML data and store it in a delta queue in SAP BW. The data can then be processed further with a corresponding DataSource and the SAPI.
UD Connect (Universal Data Connect): To connect to data sources, UD Connect can use the JCA-capable (J2EE Connector Architecture) BI Java Connectors that are available as resource adapters for various drivers, protocols, and providers.
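To make the flat-file path concrete, here is a generic sketch of the consolidate-then-load idea. This is not SAP code: the CSV layout, field names, and the transform step are hypothetical, standing in for what an extractor or ETL tool would do before handing records to SAP BW.

```python
import csv
import io

# A hypothetical CSV extract, as a data provider might deliver it.
flat_file = io.StringIO(
    "customer,region,revenue\n"
    "Alpha GmbH,DE,1200.50\n"
    "Beta Inc,US,980.00\n"
)

def transform(row: dict) -> dict:
    """Consolidate into a single target format: trimmed, typed, uppercased keys."""
    return {
        "CUSTOMER": row["customer"].strip(),
        "REGION": row["region"].strip().upper(),
        "REVENUE": float(row["revenue"]),
    }

records = [transform(row) for row in csv.DictReader(flat_file)]
total = sum(r["REVENUE"] for r in records)
print(len(records), total)
```

The point of the staging tools above is exactly this step at scale: whatever the source (flat file, Siebel, PeopleSoft), everything is normalized into one target structure before loading.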