The modern world revolves around a network of connected things. We're constantly using, or being monitored by, a variety of different technologies, many of which are powered by IoT SIM cards. It's now very clear that the Internet of Things (IoT) is becoming more and more prominent, not only in consumer tech, but in business too.
What Are IoT SIM Cards?
IoT SIM cards (otherwise known as M2M SIMs) provide the connectivity required to automate processes. When we want two devices to communicate, they must do so using a data connection. In most cases, it wouldn't be wise to utilize a standard cellular mobile SIM; this form of connection doesn't provide the critical functionalities that IoT SIM cards provide. But what are these functionalities?
Whenever you execute a query, a short message is returned to the client with the number of rows that are affected by that T-SQL statement. When you use SET NOCOUNT ON, this message is not sent. This can improve performance by reducing network traffic slightly. It is best to use SET NOCOUNT ON in SQL Server triggers and stored procedures unless one or more of the applications using the stored procedures require it to be OFF because they are reading the value in the message. SET NOCOUNT ON doesn't affect the result that is returned. It only suppresses the extra packet of message information, which is otherwise sent back to the client as a small (nine-byte) message packet called DONE_IN_PROC for each statement executed. The server-based logic, and values such as @@ROWCOUNT, are all unaffected.
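As a sketch, a stored procedure would typically enable it as its first statement (the procedure, table, and column names here are hypothetical):

```sql
-- Hypothetical stored procedure; names are illustrative only.
CREATE PROCEDURE dbo.UpdateCustomerEmail
    @CustomerID INT,
    @Email      NVARCHAR(255)
AS
BEGIN
    SET NOCOUNT ON;  -- suppress the DONE_IN_PROC message for each statement

    UPDATE dbo.Customers
    SET Email = @Email
    WHERE CustomerID = @CustomerID;

    -- @@ROWCOUNT is unaffected: NOCOUNT only suppresses the network message.
    SELECT @@ROWCOUNT AS RowsAffected;
END;
```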
When I envision building an application, I think of using React and Redux on the front end, talking to a set of RESTful services built with Node and Hapi (or Express). Over time, however, I've realized that this approach does not scale well as you add new features to the front end. For example, consider a page that displays user information along with the courses a user has enrolled in. At a later point, you decide to add a section that displays popular book titles that one can view and purchase. If every entity is treated as a microservice, then getting data from three different microservices requires the front-end app to send three HTTP requests. The app's performance degrades as the number of HTTP requests increases.
When I read about GraphQL, I realized it was an ideal way of building an app and that I didn't need to look for anything else. The GraphQL layer can be viewed as a façade that sits on top of your RESTful services, or as a layer used to talk directly to the persistence layer. This layer provides an interface that allows its clients (front-end apps) to query the required fields from various entities in one single HTTP request.
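For instance, against a hypothetical schema (all field names here are illustrative), a single request could fetch the user, their courses, and popular books at once:

```graphql
# Hypothetical schema; field names are made up for illustration.
query DashboardData($userId: ID!) {
  user(id: $userId) {
    name
    email
  }
  enrolledCourses(userId: $userId) {
    title
  }
  popularBooks(limit: 5) {
    title
    price
  }
}
```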
Let's learn the basics of microservices and microservices architectures. We will also start looking at a basic implementation of a microservice with Spring Boot. We will create a couple of microservices and get them to talk to each other using Eureka Naming Server and Ribbon for Client Side Load Balancing.
This is part 2 of this series. In this part, we will focus on creating the Forex Microservice.
NativeScript Sidekick is all about making your life developing cross-platform, native mobile apps even easier. And what better way to simplify app development than to provide easy-to-use (and nice looking) starter kits for you to bootstrap your app development process?
This post is part of our "Week of NativeScript Sidekick" that goes into the how-to of each major Sidekick feature. We are starting the week off with the starter kits, of course!
The concept of Serverless Computing, also called Functions as a Service (FaaS) is fast becoming a trend in software development. This blog post will highlight steps and best practices for integrating Split feature flags into a serverless environment.
A Quick Look Into Serverless Architecture
Serverless architectures enable you to add custom logic to other provider services, or to break up your system (or just a part of it) into a set of event-driven, stateless functions that execute on a certain trigger, perform some processing, and act on the result: sending it to the next function in the pipeline, returning it as the result of a request, or storing it in a database. One interesting use case for FaaS is image processing, where there is a need to validate the data before storing it in a database, retrieve assets from an S3 bucket, etc.
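A minimal sketch of such a validation function might look like this; the `handler` name and the event shape are illustrative assumptions, not a real provider's event format:

```python
import base64
import json

# Illustrative limits; a real function would read these from configuration.
MAX_SIZE_BYTES = 5 * 1024 * 1024
ALLOWED_TYPES = {"image/png", "image/jpeg"}

def handler(event, context=None):
    """Validate an uploaded image before it is persisted."""
    body = base64.b64decode(event["body"])
    content_type = event.get("contentType", "")
    if content_type not in ALLOWED_TYPES:
        return {"statusCode": 415, "body": json.dumps({"error": "unsupported type"})}
    if len(body) > MAX_SIZE_BYTES:
        return {"statusCode": 413, "body": json.dumps({"error": "too large"})}
    # At this point a real function might store the image in an S3 bucket
    # or pass the result to the next function in the pipeline.
    return {"statusCode": 200, "body": json.dumps({"size": len(body)})}
```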
If you work with big data or artificial intelligence at all, then you're dealing with data science. Data science is all about extracting meaning from data, which is the whole point of having data in the first place, right?
In this post, we'll take a look at data science from a variety of angles so that you can understand it whether you're a data science aficionado or are putting your data science lab coat on for the first time. We'll look at some of the most popular and helpful data science articles on DZone, some outside resources on the topic, and some DZone publications that can help you learn more.
Global Study Identifies Existing Organizational Culture as a Key Hurdle for Companies to Overcome in Order to Thrive in Digital Economy
Thanks to Chris Wysopal, CTO at Veracode (now part of CA Technologies), for discussing the results of the second phase of a global survey of more than 1,200 IT leaders on the topic of secure software development. Conducted by IT industry analyst firm Freeform Dynamics, the new report highlights the influence of an organization's culture on its ability to integrate security practices into its software development initiatives, a practice and approach commonly known as DevSecOps.
According to Chris, CEOs will need to say "security is job one" if it will ever take precedence over speed to market. Headlines are not turning into action.
First, we have the problem of object classification, in which we have an image and want to know whether it contains any particular category, e.g., whether the image contains a car or doesn't contain a car.
Recursion is a technique for breaking down a problem into smaller pieces. It lets us remove some of the local side effects we perform when writing looping structures and also makes our code more expressive and readable. In this post, we will see why it is a very useful technique in functional programming and how it can help us.
In the function that we wrote in the previous post:
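That function isn't reproduced in this excerpt, but as a minimal, hypothetical illustration of the pattern, here is a list sum written recursively instead of with a loop:

```python
# A minimal, made-up example of the recursive pattern: a base case plus
# a smaller subproblem, with no loop variable to mutate.
def total(items):
    """Return the sum of a list by reducing it to smaller subproblems."""
    if not items:                       # base case: nothing left to add
        return 0
    return items[0] + total(items[1:])  # head plus the sum of the tail

print(total([1, 2, 3]))  # 6
```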
Big data is a general term used to refer to all kinds of data that exist in the current world of business. From digital data and records for healthcare facilities to the massive paperwork in government agencies that is archived for future reference, technology has given us a service-oriented architecture to analyze such information for our own good. Big data can never be categorized under one description or definition because experts are still devising ways through which more benefits can be derived from it. The beautiful thing about information technology is that it consistently evolves and is always available for companies that are willing to embrace it. On the other hand, the development of cloud computing has made it easier for business enterprises to get technology in packages that are affordable. The cost of storing company information was significantly reduced by the use of cloud computing, which also came with multiple applications that can be utilized by small business enterprises.
Since the birth of the internet, there has been an explosion of a wide array of information on the World Wide Web as cloud computing continues to develop steadily. Both standard users and digital marketers can now generate loads of information about consumers using social media marketing platforms on a daily basis. Sometimes, it can be quite an uphill task for institutions and business organizations to manage the amount of data that they generate and store each day. For instance, 2.5 quintillion bytes of data are created daily, which may present a storage and sorting challenge for cloud computing.
Recently, Speedometer 2.0 was released by the WebKit team at Apple. It helps developers and testers test web responsiveness by simulating a To-Do app implemented in various frameworks. In this blog post, I am going to share my experiments with Speedometer 2.0.
Speedometer is a simple web application which helps to benchmark the web apps' responsiveness. It works great in Internet Explorer as well as a variety of other browsers. To start testing, visit http://browserbench.org/Speedometer2.0/
If you are new to Agile, it is a process that, in most software testing companies, starts at the beginning of the project, with continuous integration between application development and software testing. As development proceeds incrementally, the QA process runs in parallel with the development phase.
Serverless is one of the developer world's most popular misnomers. Contrary to its name, serverless computing does in fact use servers, but the benefit is that you can worry less about maintenance, scale, and configuration. This is because serverless is a cloud computing execution model where a cloud provider dynamically manages the allocation of machine and computational resources. You are basically deploying code to an environment without visible processes, operating systems, servers, or virtual machines. From a pricing perspective, you are typically charged for the actual amount of resources consumed and not by pre-purchased capacity.
Chatbots have slowly permeated our lives, and what was once considered a luxury has become an essential element for businesses. With so many non-coding and coding chatbot builders available, why should you consider IBM Watson? How does it make your chatbot more capable and effective in delivering the intended service?
Natural Language Processing
With its advanced natural language processing (NLP) capabilities, IBM Watson makes your chatbot capable of understanding human intent and needs. Thus, your chatbot can respond to queries like a human would. What are the techniques involved in Watson NLP that a chatbot developer should know in order to build an AI chatbot?
Ethereum is a programmable blockchain that allows users to create their own operations. This Refcard highlights fundamental information on Ethereum Blockchain and demonstrates the steps to get a private blockchain up and running. By the end, you will be able to set up two running nodes on one local machine.
I recently had a discussion with a friend, a relatively junior but very smart software developer. She asked me about exception handling. Her questions pointed toward a tips-and-tricks kind of path, and there is definitely a list of those. But I am a believer in the context and motivation behind the way we write software, so I decided to write down my thoughts on exceptions from that perspective.
Exceptions in programming (using Java as a stage for our story) are used to notify us that a problem occurred during the execution of our code. Exceptions are a special category of classes. What makes them special is that they extend the Exception class which in turn extends the Throwable class. Being implementations of Throwable allows us to "throw" them when necessary. So, how can an exception happen? Instances of exception classes are thrown either from the JVM or in a section of code using the throw statement. That is the how, but why?
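As a small illustration of the "how" (all class and exception names here are made up), a business-rule violation can be signaled with a custom exception that extends Exception and is raised with the throw statement:

```java
// Hypothetical checked exception signaling a business-rule violation.
class InsufficientFundsException extends Exception {
    InsufficientFundsException(String message) { super(message); }
}

class Account {
    private double balance;

    Account(double balance) { this.balance = balance; }

    // Throws when the requested amount exceeds the balance.
    double withdraw(double amount) throws InsufficientFundsException {
        if (amount > balance) {
            throw new InsufficientFundsException("balance too low: " + balance);
        }
        balance -= amount;
        return balance;
    }

    public static void main(String[] args) {
        Account account = new Account(100.0);
        try {
            account.withdraw(250.0);
        } catch (InsufficientFundsException e) {
            System.out.println("Caught: " + e.getMessage());
        }
    }
}
```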
In our simplified case, we would like to track assets throughout the supply chain. In reality, the data structures for the assets might be more complicated, but we will focus on a few aspects only: name, description, and manufacturer. It should be guaranteed that there is only a single instance of an asset at any time, which leads to a uniqueness requirement for the ID. We would like to keep track of the origin and history of the asset. The key actions that should be supported are the creation of new assets as well as transfers of assets. Furthermore, there should be the possibility to check whether a person is actually in possession of the asset ownership certificate or not.
I was doing some work on the AWS API Gateway, and as I was going through their API documentation I found some of the OpenAPI vendor extensions they use as part of operations. These vendor extensions show up in the OpenAPI you export for any API and reflect how AWS has extended the OpenAPI specification, making sure it does what they need it to do as part of AWS API Gateway operations.
AWS has 20 separate OpenAPI vendor extensions as part of the OpenAPI specification for any API you manage using their gateway solution:
You may be familiar with MVC patterns like Java Swing's, manipulating UI components directly to handle events and set their properties. You may also appreciate how Angular's MVVM saves you from operating on the DOM. Although that's great, you still need to manage data transfer between the client and the server. ZK, on the other hand, supports data binding with Java objects directly and handles data transmission between client and server transparently. Let me demonstrate with a simple application.
MVVM (Model-View-ViewModel) Pattern
MVVM (Model-View-ViewModel) is a design pattern created by John Gossman for WPF. This pattern divides an application into 3 roles: View, Model, and ViewModel.
Using a Continuous Integration system is a very convenient way to organize the process of building, testing and delivering software. Jenkins has many plugins that enhance its usability, and one of these is the Jenkins Performance Plugin. The Jenkins Performance Plugin allows users to run tests using popular open source load testing tools, get reports from them and analyze graphic charts. This ability is very important for testing the stability of applications.
In this blog post, we will review how to use Jenkins with this Performance Plugin. You will learn how to organize performance testing in each software build, so you can better understand whether your application is stable under load. Running performance tests in each build can help us determine if recent changes are causing problems, if there is a more gradual degradation of system performance, or if your system is able to handle its traffic load optimally. This plugin is managed, maintained, and evolved by BlazeMeter's Andrey Pokhilko.
Cloud computing is a relatively new platform with tremendous and exciting potential to revolutionize almost every tangible aspect of human life. Though many people think of this tech as a safe way to store files and run applications via an internet connection, it also represents a much more significant development. This technology stands as a brand new and still developing paradigm shift in the power and significance of computers and how this will affect virtually every aspect of the human experience.
Revolution In Technology
Cloud computing companies provide Software as a Service (SaaS), Infrastructure as a Service (IaaS), and Platform as a Service (PaaS), and enable the increasingly significant application of the Internet of Everything (IoE). These resources, combined with improving methods of utilizing huge reams of data and the almost scarily good applications of improving AI, are poised to fundamentally change the way people interact with their devices and other machines, businesses, their environments, and other human beings.
Implementing an Agile methodology was a critical success factor for the solutions developed by our team and for the continuous increase in customer satisfaction, indicated by higher grades in satisfaction surveys as well as spontaneous comments congratulating us. We develop solutions for a very specific and crowded niche in the aeronautics industry, by the way.
Before introducing how Agile was implemented in our team, let me describe our work environment to give some context. In many articles on this subject, the authors state that Agile is not a silver bullet. I agree that this is true if you don't receive support from upper management. In our case, the need to go Agile was identified by the team, and the whole implementation process was bottom-up. We were lucky to have support from upper management.
Using stub (or mock) objects and mocking frameworks is a common approach to writing tests, but people starting to use them rarely consider the difficulties they will face in a dynamic project where requirements change often, and where the tests have to be maintained over time by other people on the team.
I recently wrote an article giving some insights on good and bad practices using mocks with the help of Java examples. In general, we often rely on libraries for the creation of stub (or mock) objects in Java. This article shows situations where the dependencies to such libraries can be omitted by using Java 8 lambda expressions. The code is based on the examples from my previous article, which makes the comparison of various approaches easier.
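As a minimal sketch of the idea (the interface and class here are hypothetical, not from the article): because a collaborator with a single abstract method is a functional interface, a lambda can stand in as a stub without any mocking library:

```java
// Hypothetical collaborator: a dependency of the code under test.
interface RateProvider {
    double rateFor(String currency);
}

// Hypothetical class under test.
class PriceCalculator {
    private final RateProvider rates;

    PriceCalculator(RateProvider rates) { this.rates = rates; }

    double inCurrency(double amountUsd, String currency) {
        return amountUsd * rates.rateFor(currency);
    }
}

class LambdaStubDemo {
    public static void main(String[] args) {
        // RateProvider has a single abstract method, so a lambda can act as
        // a stub with a fixed return value -- no mocking library required.
        RateProvider stub = currency -> 0.5;
        PriceCalculator calculator = new PriceCalculator(stub);
        System.out.println(calculator.inCurrency(10.0, "EUR")); // prints 5.0
    }
}
```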
Deleting data from a table using T-SQL works quite a lot like the UPDATE statement.
How It Works
It works the same way: you supply the DELETE statement and then the table name. You're not going to specify columns in any way, because deleting data is all about removing a row. If you just wanted to remove the values in a column, you would use the UPDATE statement. Because of this, the only other thing you need for a DELETE statement is the WHERE clause. Just like with the UPDATE statement, if you don't supply a WHERE clause, the DELETE statement will remove all data in the table. Be very careful about using this statement. Make sure you've always got a WHERE clause.
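A quick sketch of both cases, using SQLite for a self-contained demonstration (the table and column names are made up; the DELETE syntax shown has the same shape in T-SQL):

```python
import sqlite3

# In-memory database with a small, made-up table for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany("INSERT INTO customers (id, name) VALUES (?, ?)",
                 [(1, "Ada"), (2, "Grace"), (3, "Edsger")])

# With a WHERE clause, only the matching row is removed.
conn.execute("DELETE FROM customers WHERE id = 2")
remaining = conn.execute("SELECT COUNT(*) FROM customers").fetchone()[0]

# Without a WHERE clause, DELETE removes every row in the table.
conn.execute("DELETE FROM customers")
left = conn.execute("SELECT COUNT(*) FROM customers").fetchone()[0]

print(remaining, left)  # 2 0
```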
As the name suggests, partial functions are only partial implementations: they do not cover every possible scenario of incoming parameters. A partial function caters only to the subset of possible data for which it has been defined. To assist developers, Scala's PartialFunction trait provides the isDefinedAt method, which can be queried to check whether the function can handle a given value.
Partial functions in Scala can be defined using the case statement. Let us define a simple partial function, squareRoot. The function takes a Double input parameter and returns its square root.
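A minimal sketch of that function, defined only for non-negative inputs, might look like this:

```scala
// squareRoot is defined only for non-negative doubles; negative
// inputs fall outside its domain.
val squareRoot: PartialFunction[Double, Double] = {
  case x if x >= 0 => math.sqrt(x)
}

squareRoot.isDefinedAt(16.0)  // true
squareRoot.isDefinedAt(-4.0)  // false
squareRoot(16.0)              // 4.0
```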
A time series can be defined as a sequence of measurements taken over time, most often at a regular interval. Another important aspect of a time series is its ordering: the meaning of the data depends heavily on the order of the measurements, so changing the order can change the meaning of the data.
The theory for time series is based on the assumption of second-order stationarity. Real-life data are often not stationary; they exhibit a linear trend over time, or they have a seasonal effect.