Lately, I've been thinking about how easy it is to fall into the trap of not challenging our ideas about the code we're working on. To challenge the default mindset of Clean Code, I recently proposed instituting Dirty Code Monday (a proposal that stirred up quite a discussion).
Anyway, here is the report from the first successful Dirty Code Monday one week ago:
Test how your application will react when many users access it at once. When building your application, you probably test it in a lot of ways, such as unit testing or simply running it and checking whether it does roughly what you expect it to do. If this succeeds, you are happy. Hooray for you, it works for one person!
Of course, you're not in the business of making web applications that will only ever be used by one person. Your app is going to be a success with millions of users. Can your application handle it? I don't know, you don't know, nobody knows! Read the rest of this blog post to see how we tested an application we are working on at Luminis and found out for ourselves.
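As a toy sketch of the idea, you can simulate many users hitting the application at the same time with a thread pool. The request function here is a stand-in I made up for illustration; a real load test would call your staging URL (or use a dedicated tool):

```python
from concurrent.futures import ThreadPoolExecutor
import time

# Stand-in for an HTTP request to the app under test; in a real load
# test this would call your staging URL with urllib or requests.
def simulated_request(user_id):
    time.sleep(0.01)  # pretend server latency
    return 200        # pretend status code

# Fire 100 concurrent "users" and collect their response codes.
with ThreadPoolExecutor(max_workers=100) as pool:
    statuses = list(pool.map(simulated_request, range(100)))

ok = sum(1 for s in statuses if s == 200)
print(f"{ok}/{len(statuses)} requests succeeded")
```

Even this crude version surfaces the right questions: do all responses still succeed, and how does latency change as concurrency grows?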
Early last year, a Microsoft research project dubbed DeepCoder announced that it had made progress creating AI that could write its own programs.
Such a feat has long captured the imagination of technology optimists and pessimists alike, who might consider software that creates its own software as the next paradigm in technology, or perhaps the direct route to building the evil Skynet.
An earlier post included code for multiplying quaternions, octonions, and sedenions. The code was a little clunky, so I refactor it here.
import numpy as np

def conj(x):
    # Conjugation negates every component except the first.
    xstar = -x
    xstar[0] *= -1
    return xstar

def CayleyDickson(x, y):
    n = len(x)
    if n == 1:
        return x * y
    m = n // 2
    a, b = x[:m], x[m:]
    c, d = y[:m], y[m:]
    z = np.zeros(n)
    z[:m] = CayleyDickson(a, c) - CayleyDickson(conj(d), b)
    z[m:] = CayleyDickson(d, a) + CayleyDickson(b, conj(c))
    return z
The CayleyDickson function implements the Cayley-Dickson construction and can be used to multiply real, complex, quaternion, and octonion numbers. In fact, it can be used to implement multiplication in any real vector space of dimension 2^n. The numeric types listed above correspond to n = 0, 1, 2, and 3; these are the only normed division algebras over the reals.
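As a quick sanity check (restating the construction so this snippet runs on its own), multiplying the quaternion basis elements i and j, represented as coefficient vectors over (1, i, j, k), should give k:

```python
import numpy as np

def conj(x):
    # Conjugation negates every component except the first.
    xstar = -x
    xstar[0] *= -1
    return xstar

def CayleyDickson(x, y):
    n = len(x)
    if n == 1:
        return x * y
    m = n // 2
    a, b = x[:m], x[m:]
    c, d = y[:m], y[m:]
    z = np.zeros(n)
    z[:m] = CayleyDickson(a, c) - CayleyDickson(conj(d), b)
    z[m:] = CayleyDickson(d, a) + CayleyDickson(b, conj(c))
    return z

# Quaternion sanity check: with basis (1, i, j, k), i * j = k.
i = np.array([0.0, 1.0, 0.0, 0.0])
j = np.array([0.0, 0.0, 1.0, 0.0])
print(CayleyDickson(i, j))  # [0. 0. 0. 1.]
```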
If you're interested in reading the full report, the DZone Community Survey, Vol. I is live and available here!
As part of our initiative to get to know our community better, we regularly conduct surveys on various topics. From the surveys, we get all kinds of interesting data on trends in the developer community, such as most popular languages, favorite and/or interesting tools, and the like. In recent surveys, we've noticed that two of the fastest growing subfields in development are Big Data and Cloud.
I gave a talk early in June at POST/CON 2018 in San Francisco. The conference was a great mix of discussions reflecting the Postman community. You can find all the talks on Google, including mine about moving towards a modern API lifecycle.
To gather insights on the current and future state of the cloud, we talked to IT executives from 33 companies about their, and their clients', use of the cloud. We asked, "What do you see as the most important elements of the cloud?" Here's what they told us:
On-demand provisioning and scalability.
Business agility and flexibility. Customers can start up and scale applications as their businesses grow. However, economics may vary significantly, particularly when multi-cloud scenarios are considered.
Innovation cycles the vendors are providing customers. The ability to rapidly scale up and down to meet customer demands geographically. How to get more velocity out of our team to reduce time to implement features. Focus on AWS.
Focused on IoT using video IoT and other data to make the product superior to on-premise solutions. Business optimization as scale. Rapid deployment. Continuous updates. Leveraging latest AI/ML technologies.
From our perspective, the most important elements include high availability, scalability, and maintenance. Using the cloud, an active-active architecture is much easier to implement than if an organization were to try to set up active-active using traditional deployment models. The cloud also makes bursting to scale much easier. Granted, an organization could deploy an infrastructure that could handle bursting to scale, but with the cloud, the resources and their management are much easier. For our customers, the maintenance of an enterprise application is completely negated because we do all the patching and upgrades for them.
Easy machine availability -> Scalability, flexibility - getting computing resource quickly - for example, if I need a 16 CPU machine, I can just go into my resources and launch what I need immediately. Managed DB service -> scalability, automated backup tools. Server abstraction using serverless lambda functions and API Gateway.
Simplicity and reliability: always on. AWS is like shopping on Amazon.
We see the following four elements as the important elements of the cloud: 1) Pay per use, on demand. 2) Near-unlimited availability. 3) Availability of specialized resources (GPUs, big memory, etc.). 4) Smaller companies often don't have the capacity for power, cooling, or physical space for additional servers.
1) Availability 99.9999%. Can I trust a cloud provider? How do you build the confidence? 2) Security and privacy. Clarity on data ownership.
When hosted in the cloud, you are no longer in control of your physical environment. Putting data there is fine to reduce labor cost and physical data server footprint. Security is now in someone else's hands. Don't go for the cheapest provider. Put a liability agreement in place.
[Security is] someone else's problem, and you get geographic redundancy. Provided backups are encrypted, you have a good level of security. How to deal with large datasets: the first full backup is copied to a hard drive and then shipped. Take data and move it onto storage.
The networking needs to be a foundational element. Get architecture correct to build off of. Get the landing zone correct. How to organize the account along with virtual private clouds. The value of automation as they move to the cloud. System-wide view of what's going on. Have a bar for skillsets for the members of your cloud team. Public cloud is a different type of environment. Networking tends to be quite difficult. Good public cloud education. Five steps: 1) secure connectivity; 2) on-prem connectivity is agile; 3) automation; 4) segmentation; 5) visibility. Software-defined cloud routing allows for centralization and automation.
Backup and security. People assume that because it's AWS, the cloud is automatically redundant. The 2018 State of AWS Protection Report shows how organizations use AWS and how to protect data within it. Most already have production environments and workloads in the cloud, yet 60% perform no backups and fewer than 10% have a recovery plan in place. Code Spaces was hacked in 2014; its environment was deleted, and it was out of business overnight. We protect critical clients on AWS, like Notre Dame and Harvard.
The opportunity to convert traditional capital expenditures for infrastructure into operational expenditure, with higher flexibility and near-instant provisioning via APIs is extremely attractive to us. Aside from a couple racks of servers for esoteric platforms, we do not maintain or own any infrastructure. It is all in the cloud. This allows for predictability in cost control.
Most customers are looking at where public cloud fits. There are five stages that need to be evaluated continuously: 1) Figure out the associated costs; do a gap analysis of on-prem versus public cloud. For on-prem, consider peak time plus 2X or 3X. Understand how the application performs in the infrastructure. Use automation to burst. 2) Don't do a proof of concept; focus on production workload. Build a cross-functional team to move the workload to the cloud. Cloud is transformational and impacts more than the IT department. Cultural change requires everyone's buy-in. It forces you to think about security up front. Create a foundation for success. 3) Evolution of the gap analysis. Learn the skills needed on the team, the process for moving from on-prem to cloud, and the tools needed. Learn all applications and dependencies. If you don't understand traffic patterns, you won't understand security and costs. Move in groups for security and compliance, and to reduce costs. Not all apps can move to the cloud in their current state. Create an application roadmap covering which apps can move and how. This forces customers to buy holistically versus independently. Take an ecosystem approach to solving the problem. 4) You have a plan; now execute it. Avoid doing things manually. Understand infrastructure as code and DevOps. Extend operational guidelines into the cloud. Build scripts and templates. Build configuration management and deployment to remove human error. 5) Optimization. Ongoing real-time insights into usage, performance, and spend for end-user experience and to control costs. Hold lines of business accountable with chargebacks.
Certain things are commodities; don't sweat the details. Memory is cheap. The ability to run an SQL server. Focus on the cost of migration and support. How easy is it to transfer and migrate? If you are running a business with a lot of computation and analysis, who has the best image processing systems? Look at the differences based on business needs.
In our view, and this certainly carries over to our own product design, simplicity, predictable and competitive pricing, and scale are the most important elements of the cloud. While some cloud providers are becoming more complex in what they offer, we've focused on keeping our products simple to use and flexible, so there is no lock-in and developers can use things however they wish. Learning and documentation are also very important to developers building applications in the cloud, and we do our best to serve them by making learning easy and accessible to everyone.
Web Assembly: A Game Changer
It's time for us to admit what we have all known is true for a long time; NoSQL is the wrong tool for many of the modern application use cases, and it's time that we move on.
NoSQL came into existence because the databases at the time couldn't handle the scale required. The rise of this new generation of data services solved many of the problems of web scale and rapidly growing data sets when it was created more than a decade ago. NoSQL also offered a new, cost-effective method for cold storage/occasional batch access for petabyte-scale data. However, in the rush to solve for the challenges of big data and large numbers of concurrent users, NoSQL abandoned some of the core features of databases that make them highly performant and easy to use.
Enterprise Application Integration (EAI) is a complex problem to solve, and different software vendors have produced different types of software products like ESBs, application servers, message brokers, API gateways, load balancers, proxy servers, and many others. These products have evolved from monolithic, heavyweight, high-performing runtimes to lean, modularized micro-runtimes. Microservices Architecture (MSA) is having a major impact on the way architects design their enterprise software systems. The requirements of ten years ago have drastically changed due to modern advancements in MSA, containers, DevOps, agility, and crazy customer demands.
If you came here to understand new EIP patterns, this is not the right place. In reality, the 65 EIP patterns introduced by Gregor Hohpe are still in action, with a few additional patterns coming through. Refer to this link to understand the EIP patterns you can still use with modern architecture.
2018 marks the 20th anniversary of open source. As we take a look back on the history of free and shareable software, we see that its evolution over the past two decades has produced many groundbreaking applications, paving the way for a free and open future.
Microsoft's recent acquisition of GitHub for $7.5 billion provides firm validation of how valuable open source technology has become. When one of the most proprietary, closed technology companies invests such a large amount of money in the world's leading open source collaboration tool, it signals to all the proprietary software vendors out there that times are changing and customers expect a different experience.
As a security company, there's a lot of pressure to keep our data secure while still moving fast and innovating in product development. I find the intersection of security and speed to be the most interesting challenge as an infrastructure security professional. The unique thing about Threat Stack is that our security and engineering teams are learning how to work together to automate security into our day-to-day processes. The goal is to make them simultaneously more secure, efficient, and effective.
I'm a firm believer that an effective SecOps organization involves people, processes, and tools, in that order. The tools we've built in-house are meant to make people's lives easier and ease some of the processes that make security a natural part of the workflow if you're trying to get a job done quickly.
It was quite difficult to name this article. Usually, I try to find a title that more or less describes a search term that I used when I was looking for information on the topic at hand, but I could not really find what I was looking for. What I have here is code that calculates locations for objects to be in front or on top of other objects and/or the spatial mesh. For this project, I use BoxCastAll, something I have tried to use before, but not very successfully. I have tried using Rigidbody.SweepTest, and although it works for some scenarios, it did not work for all. My floating info screens ended up half a mountain away (in Walk the World), or the airport could not move over the ground because of some tiny obstacle blocking it (in AMS HoloATC). So, I tried a new approach.
This is part one of a two-post blog post. In this post, I will explain how the BoxCast works and what extra tricks and calculations were necessary to get it to work properly.
Earlier this spring, Oracle and other major vendors announced some of the early plans for Jakarta EE (formerly known as Java EE) in its new home at the Eclipse Foundation. And a survey of nearly 2,000 Java developers revealed that "Cloud Native" capabilities were the top requirement for the platform's evolution. For our series focused on cloud trends for developers, DZone caught up with Mike Milinkovich, Executive Director at the Eclipse Foundation, to learn more about what the cloud-native future holds for Jakarta EE.
DZone: Taking the reins on the run-time for 10M Java developers is a pretty huge undertaking. What does that mean for Eclipse Foundation?
The US Clarifying Lawful Overseas Use of Data (CLOUD) Act was quietly enacted into law on March 23, 2018. I say quietly due to the controversial nature of how it was passed: snuck into the back of a 2,300-page federal spending bill on the eve of Congress' vote. While debate rages on about both the way the bill was passed and the wide latitude the Act gives to the President and the State Department, the fact remains that it has been signed into law, and organizations need to start planning how to respond. For many, both in the US and abroad, that planning has drawn increased interest in Cloud Access Security Brokers (CASBs), and specifically, in cloud encryption.
The CLOUD Act is meant to expedite law enforcement access to online/cloud data, specifically when that data is stored abroad. CLOUD is an update to the Electronic Communications Privacy Act (ECPA), which was passed in 1986, long before cloud was even a twinkle in any entrepreneur's eyes. Under ECPA, the only way for the US and a foreign government to exchange such data was under a Mutual Legal-Assistance Treaty (MLAT), which must be passed by a 2/3 vote of the Senate.
When I teach any sort of product/project/portfolio management, I ask, "Who believes multitasking works?" Always, at least several managers raise their hands. They believe multitasking works because they multitask all the time. Why? Because the managers have short work-time and long decision-wait time.
If you are a manager, your time for any given decision probably looks like this: short bursts of work on the decision, separated by long stretches of waiting for the information you need.
Once you recognize the need for a developer community, the next step is putting together a list of wants and needs for your community. We created the following list of things every great developer community needs.
Focus is key.
If I need my sink repaired, I'm calling a plumber, not an electrician who also fixes sinks. Find a company that knows developers and has a proven track record of creating spaces where developers can collaborate.
Xamarin.Forms allows us to create layouts in two ways: XAML and C# code. We are going to discover each layout in both ways. To simplify, we are going to create a new application named LayoutsApp, and suppose our UI has two controls: one Label and one Button. Controls in Xamarin.Forms will be covered in more detail in the next article.
"If testers are curious enough and they get in there and poke around and not just follow, in this case, the software testing test case, and they see connections, see different scenarios and things that have been come up with, that helps them know the product more, which makes them much better testers, too." -Adam Bertram
On this week's episode of Continuous Testing Live, Ingo Philipp and Adam "The Automator" Bertram share their thoughts on the increasing presence of test automation and DevOps in modern software delivery lifecycles. While there are certainly cries to "automate everything" to get the most ROI, there's one software testing practice that shouldn't be automated or ignored.
I import and work with a number of OpenAPI definitions that I come across in the wild. When I come across a version 1.2, 2.0, or 3.0 OpenAPI, I import it into my API monitoring system for publishing as part of my research. After the initial import of any OpenAPI definition, the first thing I look for is consistency in the naming of paths and the availability of summaries, descriptions, and tags. The naming conventions used for paths are all over the place; some are cleaner than others. Most have a summary, with fewer having descriptions, but I'd say about 80% of them do not have any tags available for each API path.
Tags for each API path are essential to labeling the value a resource delivers. I'm surprised that API providers don't see the need for applying these tags. I'm guessing it is because they don't work with many external APIs, and really haven't put much thought into other people working with their OpenAPI definition beyond it just driving their own documentation. Many people still see OpenAPI as simply a driver of API documentation on their portal, and not as an API discovery or complete lifecycle solution that is portable beyond their platform. They aren't considering how tags applied to each API resource help others index, categorize, and organize APIs based upon the value they deliver.
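This kind of audit is easy to automate once a definition is parsed. Here is a minimal sketch that scans a (hypothetical, hand-made) OpenAPI 3.0 definition for operations missing tags or a summary; the paths and field names in the sample spec are my own illustration:

```python
# Audit an OpenAPI definition (already parsed into a dict, e.g. from
# JSON or YAML) for operations that lack tags or a summary.
HTTP_METHODS = {"get", "put", "post", "delete", "patch", "options", "head"}

def audit_paths(spec):
    """Return {path: [missing field names]} for incomplete operations."""
    problems = {}
    for path, item in spec.get("paths", {}).items():
        for method, op in item.items():
            if method not in HTTP_METHODS:
                continue
            missing = [f for f in ("tags", "summary") if not op.get(f)]
            if missing:
                problems.setdefault(path, []).extend(missing)
    return problems

# A tiny hypothetical definition: one fully labeled path, one untagged.
spec = {
    "openapi": "3.0.0",
    "paths": {
        "/users": {"get": {"summary": "List users", "tags": ["users"]}},
        "/orders": {"get": {"summary": "List orders"}},
    },
}

print(audit_paths(spec))  # {'/orders': ['tags']}
```

Running a check like this across every definition you import makes the 80%-untagged problem visible immediately.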
Let's build a non-trivial app with React and then refactor it to use Redux!
Much of the advice you get regarding the addition of Redux to your React projects is to only do so once they reach a certain size, because of the extra complexity Redux adds. That's certainly fair. But it will leave you with a bit of technical debt (refactoring to be done later) that you wouldn't have if you just started out with React and Redux.
Consequently, I thought it might be nice to present an exercise where we do just that: build an app as simply as possible using React and ReactDOM alone (not even JSX since you need more dependencies and a build process to support that), and then refactor to use JSX and Redux.
The idea of model fusions is pretty simple. You combine the predictions of a bunch of separate classifiers into a single uber-classifier prediction that is, in theory, better than the predictions of its individual constituents.
As my colleague Teresa Álverez mentioned in a previous post, however, this doesn't typically lead to big gains in performance. We're typically talking 5-10% improvements even in the best case. In many cases, OptiML will find something as good or better than any combination you could try by hand.
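The simplest form of fusion is just averaging. As an illustration (the probability values below are made up), suppose three classifiers each emit class probabilities for the same four test instances; the fused prediction averages the probabilities and picks the most likely class:

```python
import numpy as np

# Hypothetical predicted probabilities from three separate classifiers
# for four test instances (columns: class 0, class 1).
preds = [
    np.array([[0.9, 0.1], [0.4, 0.6], [0.2, 0.8], [0.7, 0.3]]),
    np.array([[0.8, 0.2], [0.6, 0.4], [0.1, 0.9], [0.6, 0.4]]),
    np.array([[0.7, 0.3], [0.3, 0.7], [0.3, 0.7], [0.8, 0.2]]),
]

# The fusion step: average the probabilities, then take the most
# likely class. Weighted averages and majority voting are common variants.
fused = np.mean(preds, axis=0)
labels = fused.argmax(axis=1)
print(labels)  # [0 1 1 0]
```

Note how the second instance flips: two of the three models lean toward class 1, so the average does too, which is exactly the "wisdom of the ensemble" effect fusion relies on.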
Data has become a first-class asset for modern businesses, corporations, and organizations irrespective of their size and scale. Any intelligent system, regardless of its complexity, needs to be powered by data. At the heart of any intelligent system, we have one or more data insight algorithms based on some sort of means of learning from data, such as machine learning, deep learning, or statistical methods, which consume this data to gather knowledge and provide intelligent insights over a period of time. Algorithms are pretty generic by themselves and cannot work out of the box on plain, raw data. There is a need to extract meaningful features from raw data so that it can be understood and consumed.
Any intelligent data insight system basically consists of an end-to-end pipeline, from ingesting raw data to leveraging data processing techniques to wrangle, process, and engineer meaningful features and attributes from this data. Then we usually leverage techniques like statistical models or machine learning models to model these features, and then deploy the model if necessary for future usage based on the problem to be solved at hand. A typical standard machine learning pipeline, based on the CRISP-DM (Cross-Industry Standard Process for Data Mining) industry-standard process model, is depicted below.
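The feature engineering step in that pipeline can be sketched very simply. Here is a toy example (the records and fields are invented for illustration) of turning raw ingested records into a numeric feature matrix an algorithm can consume; strings are cast to numbers and a categorical field is one-hot encoded:

```python
import numpy as np

# Raw records as they might arrive from ingestion; everything is a
# string and one field is categorical. (Hypothetical data.)
raw = [
    {"age": "34", "plan": "pro"},
    {"age": "21", "plan": "free"},
    {"age": "45", "plan": "pro"},
]

# Feature engineering: cast numerics and one-hot encode the category
# so a generic learning algorithm can consume the data.
plans = sorted({r["plan"] for r in raw})  # stable category order

def features(record):
    onehot = [1.0 if record["plan"] == p else 0.0 for p in plans]
    return [float(record["age"])] + onehot

X = np.array([features(r) for r in raw])
print(X.shape)  # (3, 3): one row per record, age + two plan indicators
```

Real pipelines add missing-value handling, scaling, and text or image feature extraction, but the shape is the same: raw records in, a numeric matrix out.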
Many inky, black pixels have been rendered over GDPR. It dramatically shifts the landscape for businesses with any EU users, so there are a lot of questions about what it means in general, as well as what it takes to actually comply with it.
In this post, we'll cover how Cockroach Labs conceives of GDPR's major tenets (known as Data Subject Rights, which translate to "things you must do for your users"), as well as some considerations as to what it actually means for your company's database.
Nowadays, for almost all services, we would like to set up at least a poor man's HA. This means we would have more than one node/server/pod at a time. This is great for load balancing and availability purposes. Nevertheless, there's a simple problem with this setup: what if you want to execute a piece of code on only one node? We can do this via a simple mutual exclusion mechanism. There are various ways to do this, but I would like to limit this post to achieving mutual exclusion with Java and the Spring Framework.
Now, imagine you have a cron job that sends an e-mail to your customers at 9 a.m. every day. You wouldn't want to send the same e-mails twice, right? If you are using the Spring Framework, you can schedule the job itself with the @Scheduled annotation, e.g. @Scheduled(cron = "0 0 9 * * *") on the method that sends the e-mails; the mutual exclusion part is what keeps only one node actually running it.
This Azure IoT reference implementation guide provides industrial equipment manufacturers with an accelerated, flexible path for delivering differentiated connected products to gain a competitive advantage through digital transformation with Microsoft Azure. This guide follows the best practices and patterns outlined in the official Microsoft Azure IoT Reference Architecture, with additional domain-specific recommendations and in-depth walkthroughs based on Bright Wolf's decade of experience designing and delivering industrial connected products for some of the largest companies in the world.
Collecting and transforming data from pumps, motors, filters, chillers, and other industrial equipment across manufacturing, oil and gas, cold chain transportation, healthcare, agriculture, and other verticals, and integrating this data into enterprise business systems for generating insights and actions embedded inside your customers' operations, presents a unique set of challenges for equipment manufacturers that don't apply to other kinds of IoT projects. These include incorporating legacy devices already deployed in the field, achieving and maintaining regulatory compliance, and integrating securely with a wide variety of applications and business tools across multiple business units and product lines.