Web design is a constantly evolving field. Changes are based on both the new requirements that internet users have and on the technologies available to developers and designers.
Machine learning is far from a new concept (the term was coined back in 1959). Over the past few years, however, a lot of progress has been made in the area. An application of artificial intelligence (AI), ML focuses on systems that learn and adapt as they accumulate data, without being explicitly programmed for each task.
When we think of clickbait, it's easy to assume it's the domain of Buzzfeed and less reputable publications striving to win your pageviews so they can earn a bit of advertising revenue. A recent Stanford University study provides an interesting insight into how the pursuit of pageviews is driving even the more reputable portions of journalism.
The study examines the impact data has on not only story popularity, but on the various elements that contribute to that popularity, such as the headline, in a number of newsrooms in both the United States and France.
When we started using GraphQL in our Node.js project, we struggled with writing tests. We read numerous blog posts searching for the best approach. Unfortunately, there didn't seem to be one, so we came up with our own. Here, we'll share our way of testing GraphQL queries.
First, we will set up everything needed for running tests:
Adding social login to a Spring application is now easy with Spring Security 5. It's well explained in the official documentation, as well as in some blogs like this one, which cover the basic requirements. In real-world applications, though, you'll have some additional requirements, like registering new users or having a stateless backend. So, in this post, we'll broadly discuss how to address the following requirements:
Users should be able to authenticate using their social accounts.
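With Spring Security 5's built-in OAuth2 client support, social providers can typically be declared in configuration rather than code; a minimal sketch in `application.yml`, assuming Google as the provider (the client ID and secret are placeholders):

```yaml
spring:
  security:
    oauth2:
      client:
        registration:
          google:
            client-id: your-client-id         # placeholder
            client-secret: your-client-secret # placeholder
            scope: openid, profile, email
```

Additional providers (Facebook, GitHub, etc.) are just further entries under `registration`.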
In the age of the "personalized web experience," authentication and user management is a given, and it's easier than ever for businesses to tap into third-party authentication providers like Facebook, Twitter, and Google to secure their APIs and identify users logged into their apps. OpenID Connect (OIDC) is a protocol for authenticating users. It lays out what an Identity Provider needs to provide in order to be considered "OpenID Connect Certified," which makes it easier than ever to consume authentication as a service.
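Concretely, an OIDC provider advertises its endpoints in a discovery document served at `/.well-known/openid-configuration`; a trimmed, illustrative example (the `auth.example.com` URLs are placeholders):

```json
{
  "issuer": "https://auth.example.com",
  "authorization_endpoint": "https://auth.example.com/authorize",
  "token_endpoint": "https://auth.example.com/token",
  "userinfo_endpoint": "https://auth.example.com/userinfo",
  "jwks_uri": "https://auth.example.com/.well-known/jwks.json",
  "response_types_supported": ["code", "id_token"],
  "id_token_signing_alg_values_supported": ["RS256"]
}
```

This is what lets client libraries configure themselves against any certified provider from a single issuer URL.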
App and data integration have dominated the news headlines of late, and for good reason. Companies are urgently moving their data and operations to the cloud so they become faster, more agile, and more data-driven. CIOs are leading the charge as IT becomes the Data as a Service (DaaS) function, arming their company with the best data possible so decision-making and innovation continuously improve.
CIOs are challenged to provide the best data across the company, and that seemingly straightforward task has become more complex with the proliferation of SaaS applications, the surge in big data, the emergence of IoT, and the rise of mobile devices, all of which must be integrated so workers across an organization can use in-context data every day. IT's integration backlog has exploded, and teams can't possibly hire enough people to manage their integration projects the old-fashioned way: by writing tons of custom code. Not only is the deployment too much work, but the cost of maintaining all of the integrations accelerates as well.
Hi, Spring fans! Welcome to another incredible installment of all that's fit to tweet, blog, record, and print about Spring! It's been an insane week! Since our last installment I was in Paris, FR, for the epic Devoxx FR conference, where I spoke at a meetup hosted by ZenikaIT, gave a workshop on Reactive Cloud Native Java, and co-presented a talk on Reactive Spring with the one-and-only Juergen Hoeller. I jumped off stage and ran to the airport to board a flight leaving 150 minutes later headed back to the US!
Now, you may have heard that Pivotal, the company that leads and/or at least actively invests in a good many open-source projects - including Spring, Cloud Foundry, Apache Tomcat, Reactor, JUnit, Kubernetes, Redis, Micrometer, and so many others besides - listed on the New York Stock Exchange on the 20th of April, 2018: we're a public company now!
Contributing features, reviewing changes, and deploying code is a day in the life of a developer. Today we are making these tasks easier and more efficient with an amazing Web IDE, more flexible pipelines, additional security testing, and so much more.
Web IDE Is Now Open Source and Generally Available
At GitLab, we want everyone to be able to contribute, whether you are working on your first commit and getting familiar with Git, or you're an experienced developer reviewing a stack of changes. Setting up a local development environment, or needing to stash changes and switch branches locally, can add friction to the development process. Using the Web IDE, you can change multiple files, preview Markdown, review the changes, and commit, all directly from a browser. You can even open the diff from a merge request and get a side-by-side view of the changes. The Web IDE is generally available in 10.7 and is now open source, so everyone can benefit.
The world is speeding up; people expect customized information and services immediately. In these tumultuous times, some companies are clinging to their legacy data infrastructure as a security blanket. However, traditional RDBMSs are just not able to provide the massive scales, edge distribution, and virtual or cloud deployments that are necessary for modern applications. In particular, there are three market drivers that spell the end of legacy systems: 5G, Internet of Things, and Machine Learning.
5G: Not an Evolution, a Revolution
There was a lot of hoopla surrounding 4G, and one could be forgiven for thinking that the transition to 5G would be similar. But make no mistake: the leap from 4G to 5G is much larger than the one from 3G to 4G. 5G requires network slicing, utilizing multiple edge deployments, and much lower latencies than 4G. With 5G, CSPs will massively expand application possibilities and capacity. However, these come with stringent requirements: for each call, the system has to know who the caller is, where they are, what the caller's policy is, whether they have credit, and more, all in milliseconds. Legacy systems simply cannot keep up.
You've probably heard of parallel testing before, but if you haven't tried it yet, the ability to multiply your testing power without multiplying your testing time can seem like something of a QA fantasy. Once you experience parallel testing, though, you'll never want to go back to life before it.
While there are probably hundreds of benefits to parallel testing, everyone will realize the value in different ways. In case you need to be convinced (or you need to convince your boss), we rounded up the top five advantages you can expect from parallel testing.
We've gotten everyone connected to SQL Server using Progress DataDirect's exclusive support for both NTLM and Kerberos authentication from Linux with Sqoop. Now, we plan to blow your minds with high-flying bulk insert performance into SQL Server using Sqoop's Generic JDBC Connector. Linux clients will get similar throughput to the Microsoft BCP tool.
So far, Cloudera and Hortonworks have been pointing shops to the high-performance DataDirect SQL Server JDBC driver to help load data volumes anywhere from 10GB to 1TB into SQL Server data marts and warehouses. It's common for the DataDirect SQL Server JDBC driver to speed up load times by 15-20X, and Sqoop will see similar improvement since it leverages JDBC batches that we transparently convert into SQL Server's native bulk load protocol. Moving data out of Hadoop and into external JDBC sources is an exciting project that represents the democratization of big data for downstream application consumers. You're definitely doing something right if you are ready to read on!
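As an illustrative sketch of what such a load can look like (the host, database, table, HDFS path, and credentials are all placeholders; `EnableBulkLoad` is a DataDirect driver connection option), a Sqoop export through the DataDirect SQL Server JDBC driver might be invoked like this:

```shell
# Illustrative only: adjust host, database, table, and export path.
sqoop export \
  --connect "jdbc:datadirect:sqlserver://dbhost:1433;databaseName=sales;EnableBulkLoad=true" \
  --driver com.ddtek.jdbc.sqlserver.SQLServerDriver \
  --username loader \
  --password-file /user/loader/.pw \
  --table ORDERS \
  --export-dir /warehouse/orders \
  --batch
```

The `--batch` flag keeps inserts flowing as JDBC batches, which is what the driver can then convert into the native bulk load protocol.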
This post is the fifth post of my Introduction to Eclipse Vert.x series. In the last post, we saw how Vert.x can interact with a database. To tame the asynchronous nature of Vert.x, we used Future objects. In this post, we are going to see another way to manage asynchronous code: reactive programming. We will see how Vert.x combined with Reactive eXtensions gives you superpowers.
Let's start by refreshing our memory with the previous posts:
AI is increasingly being used to guide and inform policing, whether it's to predict recidivism rates or guide police forces on the most effective ways to utilize their resources. Fears persist, however, that these algorithms hard code the various racial biases that already exist in policing today.
It's a fear that a recent study from UCLA and Louisiana State University suggests is overblown. The study, which is believed to be the first to use real-time field data from a deployment of predictive policing in Los Angeles, found that no such increase in biased arrests occurred.
There are many situations where you need to express that something is "optional": an object that might or might not contain a value. You have several options for implementing this, but with C++17 the most helpful way is probably std::optional.
For today I've prepared one refactoring case where you can learn how to apply this new C++17 feature.
One of the best features of CouchDB is its change feed, which allows us to get a feed of the changes happening in our database. It's also possible to have a serverless function (the examples here are for IBM Cloud Functions but should also work for Apache OpenWhisk) that fires in response to activity on that change feed. I have a database of events that I want to react to when they happen, but the change feed doesn't include the actual document that was added. You can, however, add one of the built-in functions to a sequence to make that happen. This post will show how to achieve this by wiring up a built-in action that fetches the document with another action of our own that then handles the data.
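As a sketch of the second half of such a sequence, the custom action only needs to accept the full document that the preceding built-in read action fetched; the document fields and the summary shape here are hypothetical:

```javascript
// OpenWhisk-style action: in a sequence, `doc` is the full document
// fetched by the preceding built-in read action.
function main(doc) {
  // React to the event document; here we just derive a small summary.
  const summary = {
    id: doc._id,
    type: doc.type || 'unknown',
    handled: true,
  };
  console.log('handling event', summary.id);
  return summary; // the return value is the sequence's result
}
```

Because the action is a plain function of its input document, it can be unit-tested locally before being deployed into the sequence.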
The Database to Connect To
If you already have a database you want to use, skip to the next section. If not, here's where we set up a Cloudant (hosted Apache CouchDB) database on IBM Cloud.
I'm taking time to showcase any API provider I come across who has published their OpenAPI definitions to GitHub, like the New York Times, Box, Stripe, SendGrid, Nexmo, and others have. I'm also taking the time to publish stories showcasing any API provider who similarly publishes Postman Collections as part of their API documentation. Next up on my list is the Triathlon API, which provides a pretty sophisticated API stack for managing triathlons around the world, complete with a list of Postman Collections for exploring and getting up and running with their API.
As application development teams adopt a cloud-native model, they use container platforms to deploy apps as independent but interoperable microservices within a broad microservices architecture. Being portable, easily duplicated and scaled, containers promote DevOps efficiencies that free developers to focus more on creating value for end users.
Almost 85% of professionals in an Intel Security survey reported storing some or all of their sensitive data in the public cloud (1). In such situations, to protect against unauthorized access by both outsiders breaking into cloud systems and operators inside the cloud vendor organization, app data security must involve pervasive encryption.
Some people think that being a Scrum Master is a part-time job: that you can also be a developer during a project, or that a Product Owner can share the Scrum Master role. The Scrum Guide does not specifically rule this scenario out, but it is not ideal, and especially if you are new to Scrum, I don't recommend it. The simple reason is that the Scrum Master's role is more than just being a facilitator during the Scrum Events.
Being a developer is a full-time job on its own. There are problems to solve, requirements to understand, and discussions to be had. Adding the Scrum Master role on top of that, where you are there to support the team and shield it from trouble, just adds more stress and complexity to the situation, and makes it harder to know which role to concentrate on. Even being a developer on another project still takes away from the Scrum Master duties. What happens is that one of the roles wins out. Usually (but not always) it is the development role, as deadlines and pressure increase and the Scrum "stuff" gets ignored.
Change through evolution is an inevitable cycle that enterprises need to embrace for growth and sustainability. Consider the growing popularity of and demand for digital assistants such as Alexa and Siri, and how software giants are working to make their products more responsive and conversational. The AI platforms and supporting applications continue to evolve and develop further to accommodate varying needs and protocols. Likewise, software quality assurance has to go through an ever-evolving process of its own, from building more compatible applications to conforming to different security protocols. Given that this holds true, what are the key reasons for QA to mature and progress continuously?
Why Software Development and Testing Needs to Keep Evolving
Neither QA nor development can work in a silo and deliver adequately. The transition of roles and evolution of processes can happen only when both functions collaborate and share their experiences. In the current context, businesses are struggling to align their business processes and applications with diverse protocols, regulations, and information security policies such as GDPR in Europe. Similarly, many other policies are influencing the adoption of QA processes and development patterns. Complying with all of these policies requires intense communication and coordination among teams.
As the saying goes, "You don't know what you have until it's gone," and in many cases that's an expensive lesson to learn. In the world of business, the same sentiment applies. We don't have to look too hard for evidence of IT projects that have crashed on the rocks of a major overhaul or replacement system project.
As recently as this week, the financial services industry bore witness to yet another IT system failure. As reported in Finextra, customers of UK bank TSB reported issues with the bank's online and mobile channels as a result of the migration from its so-called legacy IT platform (from former owner Lloyds) to the Proteo4 system from new owner Banco Sabadell.
Based on IDC research, the CAPEX-OPEX drive has created an environment whereby in 2018, the typical IT department will have a minority of their apps and platforms residing in on-premises data centers.
The traditional mode of capacity planning focused on obtaining servers funded by the applications that could secure capital investment: application groups had to obtain the capital needed to fund the compute resources that ran their application. Put simply, CAPEX versus OPEX is like choosing between purchasing a car outright, with yearly depreciation benefits, and leasing a vehicle at a monthly cost with some limits on mileage.
One of the hardest parts of introducing new ideas and features over the lifecycle of a project is making sure that each specific feature or newly added framework will play an essential part in the project's success.
By building features that are hard to implement but not essential to the customer, or by adding tools to our stack that demand expertise and learning time yet have limited use in the project, we risk the project's success and our customer's satisfaction.
Foundry is Redgate's R&D division. Part of our remit is to look further into the future. We do that by keeping an eye on things that might indicate a change in what you need from our products (e.g. a new problem), or a development in technology (e.g. a better way of solving an existing problem). We want to develop a clearer understanding of the future of database DevOps.
In this post, we're sharing our assessment of the status of DevOps for the database, giving you a behind-the-scenes look at how we arrived at that assessment, and setting out our direction.
Many, many thanks to all of the reviewers who took the time to give feedback, and to Red Hat for sponsoring my time, especially Burr Sutter and the talented folks at O'Reilly who helped coordinate the effort and make it come to fruition.
Serverless and containers have changed the way we leverage public clouds and how we write, deploy, and maintain applications. A great way to combine the two paradigms is to build a voice assistant with Alexa, based on Lambda functions written in Go, that deploys a Docker Swarm cluster on AWS.
The figure below shows all components needed to deploy a production-ready Swarm cluster on AWS with Alexa.
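At its core, the Lambda side of such a skill reduces to a small intent handler; a minimal Go sketch independent of the AWS SDK (the DeploySwarm and ClusterStatus intent names and the reply texts are hypothetical, and the real function would trigger the deployment rather than just return text):

```go
package main

import "fmt"

// HandleIntent maps an Alexa intent name to a spoken response.
// In the real skill this is where the Lambda function would kick off
// the Swarm deployment on AWS; here it only returns the reply text.
func HandleIntent(intent string) string {
	switch intent {
	case "DeploySwarm":
		return "Deploying your Docker Swarm cluster on AWS."
	case "ClusterStatus":
		return "Your Swarm cluster is up and running."
	default:
		return "Sorry, I didn't understand that request."
	}
}

func main() {
	fmt.Println(HandleIntent("DeploySwarm"))
}
```

Keeping the intent-to-action mapping in one pure function like this makes the voice interface easy to test without invoking Alexa or Lambda at all.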