Categories
Computing

Node

Intro

Node.js is an open-source, cross-platform JavaScript runtime environment that executes JavaScript code outside of a browser. Node.js lets developers use JavaScript to write command line tools and for server-side scripting—running scripts server-side to produce dynamic web page content before the page is sent to the user’s web browser. Consequently, Node.js represents a “JavaScript everywhere” paradigm, unifying web-application development around a single programming language, rather than different languages for server- and client-side scripts.

Though .js is the standard filename extension for JavaScript code, the name “Node.js” does not refer to a particular file in this context and is merely the name of the product. Node.js has an event-driven architecture capable of asynchronous I/O. These design choices aim to optimize throughput and scalability in web applications with many input/output operations, as well as for real-time Web applications (e.g., real-time communication programs and browser games).

The Node.js distributed development project, governed by the Node.js Foundation, is facilitated by the Linux Foundation’s Collaborative Projects program.

Overview

Node.js allows the creation of Web servers and networking tools using JavaScript and a collection of “modules” that handle various core functionalities. Modules are provided for file system I/O, networking (DNS, HTTP, TCP, TLS/SSL, or UDP), binary data (buffers), cryptography functions, data streams, and other core functions. Node.js’s modules use an API designed to reduce the complexity of writing server applications.
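As a rough sketch of how little is needed to get started with only core modules (the file name and port below are placeholders, not part of any real project):

    // minimal-server.js: serves a file using only Node.js core modules
    const http = require('http');
    const fs = require('fs');

    const server = http.createServer((req, res) => {
      // Read the file asynchronously; the callback runs once the I/O completes.
      fs.readFile('./index.html', 'utf8', (err, html) => {
        if (err) {
          res.writeHead(500, { 'Content-Type': 'text/plain' });
          res.end('Could not read index.html');
          return;
        }
        res.writeHead(200, { 'Content-Type': 'text/html' });
        res.end(html);
      });
    });

    server.listen(3000, () => console.log('Listening on http://localhost:3000'));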

JavaScript is the only language that Node.js supports natively, but many compile-to-JS languages are available. As a result, Node.js applications can be written in CoffeeScript, Dart, TypeScript, ClojureScript, and others.

Node.js is primarily used to build network programs such as Web servers. The most significant difference between Node.js and PHP is that most functions in PHP block until completion (commands only execute after previous commands finish), while Node.js functions are non-blocking (commands execute concurrently or even in parallel, and use callbacks to signal completion or failure).
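A small sketch of that difference using the built-in fs module (the file name is just a placeholder): the synchronous call blocks until the read finishes, while the asynchronous call returns immediately and reports completion or failure through a callback.

    const fs = require('fs');

    // Blocking: nothing else runs until the whole file has been read.
    const data = fs.readFileSync('./report.txt', 'utf8');
    console.log('sync read finished, length:', data.length);

    // Non-blocking: readFile returns at once; the callback fires later
    // with either an error or the file contents.
    fs.readFile('./report.txt', 'utf8', (err, contents) => {
      if (err) {
        console.error('read failed:', err);
        return;
      }
      console.log('async read finished, length:', contents.length);
    });

    console.log('this line runs before the async callback');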

Technical Details

Node.js is a JavaScript runtime environment that processes incoming requests in a loop, called the event loop.

Threading

Node.js operates on a single-thread event loop, using non-blocking I/O calls, allowing it to support tens of thousands of concurrent connections without incurring the cost of thread context switching. The design of sharing a single thread among all the requests that use the observer pattern is intended for building highly concurrent applications, where any function performing I/O must use a callback. To accommodate the single-threaded event loop, Node.js uses the libuv library—which, in turn, uses a fixed-sized thread pool that handles some of the non-blocking asynchronous I/O operations.
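A tiny sketch of the run-to-completion behaviour this implies: even a zero-delay timer callback is only dispatched by the event loop after the current synchronous code has finished.

    // Callbacks are queued and dispatched by the event loop; synchronous
    // code always runs to completion before any queued callback fires.
    setTimeout(() => {
      console.log('timer callback: runs on a later turn of the event loop');
    }, 0);

    console.log('synchronous code: runs first');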

A thread pool handles the execution of parallel tasks in Node.js. The main thread function call posts tasks to the shared task queue, which threads in the thread pool pull and execute. Inherently non-blocking system functions such as networking translate to kernel-side non-blocking sockets, while inherently blocking system functions such as file I/O run in a blocking way on their own threads. When a thread in the thread pool completes a task, it informs the main thread of this, which in turn, wakes up and executes the registered callback.

A downside of this single-threaded approach is that Node.js doesn’t allow vertical scaling by increasing the number of CPU cores of the machine it is running on without using an additional module, such as cluster, StrongLoop Process Manager, or pm2. However, developers can increase the default number of threads in the libuv thread pool. The server operating system (OS) is likely to distribute these threads across multiple cores. Another problem is that long-lasting computations and other CPU-bound tasks freeze the entire event-loop until completion.
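As a hedged sketch of those workarounds, the built-in cluster module can fork one process per CPU core, and the UV_THREADPOOL_SIZE environment variable (read by libuv at startup) raises the default pool of four threads; the port number here is arbitrary.

    // Run with e.g. UV_THREADPOOL_SIZE=8 node server.js to enlarge
    // libuv's default pool of 4 worker threads.
    const cluster = require('cluster');
    const http = require('http');
    const os = require('os');

    if (cluster.isMaster) {
      // Fork one worker per CPU core; the OS schedules them across cores.
      for (let i = 0; i < os.cpus().length; i++) {
        cluster.fork();
      }
    } else {
      // Each worker runs its own single-threaded event loop.
      http.createServer((req, res) => {
        res.end(`handled by worker ${process.pid}\n`);
      }).listen(8000);
    }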

Node.js uses libuv to handle asynchronous events. Libuv is an abstraction layer for network and file system functionality on both Windows and POSIX-based systems such as Linux, macOS, OSS on NonStop, and Unix.

The core functionality of Node.js resides in a JavaScript library. The Node.js bindings, written in C++, connect these technologies to each other and to the operating system.

V8

V8 is the JavaScript execution engine which was initially built for Google Chrome. It was then open-sourced by Google in 2008. Written in C++, V8 compiles JavaScript source code to native machine code at runtime (just-in-time compilation) rather than interpreting it or compiling it ahead of time (AOT).

Package management

npm is the pre-installed package manager for the Node.js server platform. It installs Node.js programs from the npm registry, organizing the installation and management of third-party Node.js programs. Packages in the npm registry can range from simple helper libraries such as Lodash to task runners such as Grunt.
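For instance (using Lodash, mentioned above, purely as an illustration), a package is fetched from the registry with npm install and then loaded with require:

    // In a project directory:
    //   npm init -y          (creates a package.json)
    //   npm install lodash   (downloads lodash from the npm registry into node_modules)

    const _ = require('lodash');

    // A typical Lodash helper: split an array into fixed-size chunks.
    console.log(_.chunk([1, 2, 3, 4, 5], 2)); // [ [ 1, 2 ], [ 3, 4 ], [ 5 ] ]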

Categories
Computing

Javascript

JavaScript, often abbreviated as JS, is a high-level, interpreted scripting language that conforms to the ECMAScript specification. JavaScript has curly-bracket syntax, dynamic typing, prototype-based object-orientation, and first-class functions.

Alongside HTML and CSS, JavaScript is one of the core technologies of the World Wide Web. JavaScript enables interactive web pages and is an essential part of web applications. The vast majority of websites use it, and major web browsers have a dedicated JavaScript engine to execute it.

As a multi-paradigm language, JavaScript supports event-driven, functional, and imperative (including object-oriented and prototype-based) programming styles. It has APIs for working with text, arrays, dates, regular expressions, and the DOM, but the language itself does not include any I/O, such as networking, storage, or graphics facilities. It relies upon the host environment in which it is embedded to provide these features.
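A brief sketch of those traits in plain JavaScript: first-class functions, dynamic typing, and prototype-based object-orientation.

    // First-class functions: functions are values that can be passed around.
    const twice = f => x => f(f(x));
    const addOne = n => n + 1;
    console.log(twice(addOne)(5)); // 7

    // Dynamic typing: a variable can hold values of different types over time.
    let value = 42;
    value = 'now a string';

    // Prototype-based object-orientation: objects delegate to other objects.
    const animal = { speak() { return `${this.name} makes a sound`; } };
    const dog = Object.create(animal); // dog's prototype is animal
    dog.name = 'Rex';
    console.log(dog.speak()); // "Rex makes a sound"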

Initially only implemented client-side in web browsers, JavaScript engines are now embedded in many other types of host software, including server-side in web servers and databases, and in non-web programs such as word processors and PDF software, and in runtime environments that make JavaScript available for writing mobile and desktop applications, including desktop widgets.

The terms Vanilla JavaScript and Vanilla JS refer to JavaScript not extended by any frameworks or additional libraries. Scripts written in Vanilla JS are plain JavaScript code.

Although there are similarities between JavaScript and Java, including language name, syntax, and respective standard libraries, the two languages are distinct and differ greatly in design. JavaScript was influenced by programming languages such as Self and Scheme.[13] The JSON serialization format, used to store data structures in files or transmit them across networks, is based on JavaScript.

Categories
Computing

Test Driven Development Cycle

1. Add a test

In test-driven development, each new feature begins with writing a test. Write a test that defines a function or an improvement to a function; it should be very succinct. To write a test, the developer must clearly understand the feature’s specification and requirements. The developer can accomplish this through use cases and user stories that cover the requirements and exception conditions, and can write the test in whatever testing framework is appropriate to the software environment. It could be a modified version of an existing test. This is a differentiating feature of test-driven development versus writing unit tests after the code is written: it makes the developer focus on the requirements before writing the code, a subtle but important difference.

2. Run all tests and see if the new test fails

This validates that the test harness is working correctly, shows that the new test does not pass without requiring new code because the required behavior already exists, and rules out the possibility that the new test is flawed and will always pass. The new test should fail for the expected reason. This step increases the developer’s confidence in the new test.

3. Write the code

The next step is to write some code that causes the test to pass. The new code written at this stage is not perfect and may, for example, pass the test in an inelegant way. That is acceptable because it will be improved and honed in Step 5. At this point, the only purpose of the written code is to pass the test. The programmer must not write code that is beyond the functionality that the test checks.

4. Run tests

If all test cases now pass, the programmer can be confident that the new code meets the test requirements and does not break or degrade any existing features. If they do not, the new code must be adjusted until they do.

5. Refactor code

The growing code base must be cleaned up regularly during test-driven development. New code can be moved from where it was convenient for passing a test to where it more logically belongs. Duplication must be removed. Object, class, module, variable and method names should clearly represent their current purpose and use, as extra functionality is added. As features are added, method bodies can get longer and other objects larger. They benefit from being split and their parts carefully named to improve readability and maintainability, which will be increasingly valuable later in the software lifecycle. Inheritance hierarchies may be rearranged to be more logical and helpful, and perhaps to benefit from recognized design patterns. There are specific and general guidelines for refactoring and for creating clean code.[6][7] By continually re-running the test cases throughout each refactoring phase, the developer can be confident that the process is not altering any existing functionality. The concept of removing duplication is an important aspect of any software design. In this case, however, it also applies to the removal of any duplication between the test code and the production code, for example magic numbers or strings repeated in both to make the test pass in Step 3.

Repeat

Starting with another new test, the cycle is then repeated to push forward the functionality. The size of the steps should always be small, with as few as 1 to 10 edits between each test run. If new code does not rapidly satisfy a new test, or other tests fail unexpectedly, the programmer should undo or revert in preference to excessive debugging. Continuous integration helps by providing revertible checkpoints.
When using external libraries it is important not to make increments that are so small as to be effectively merely testing the library itself,[4] unless there is some reason to believe that the library is buggy or is not sufficiently feature-complete to serve all the needs of the software under development.
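As a hedged illustration of one pass through the cycle in JavaScript (the slugify function and the file names are invented for this example), the test is written first using Node’s built-in assert module, fails while slugify does not exist, and then passes once the minimal implementation is added:

    // Step 1: write the test first (slugify.test.js). Running it now fails
    // because ./slugify does not exist yet; that failure is expected (Step 2).
    const assert = require('assert');
    const slugify = require('./slugify');

    assert.strictEqual(slugify('Hello World'), 'hello-world');
    assert.strictEqual(slugify('  Test  Driven  '), 'test-driven');
    console.log('all tests passed');

    // Step 3: the simplest implementation that makes the test pass (slugify.js).
    // It can be refactored in Step 5 while the test keeps being re-run:
    // module.exports = s => s.trim().toLowerCase().split(/\s+/).join('-');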

Categories
Computing

Devops

What Is It?

DevOps is a set of practices that combines software development (Dev) and information-technology operations (Ops) and aims to shorten the systems development life cycle while providing continuous delivery with high software quality.

Toolchains

As DevOps is intended to be a cross-functional mode of working, those who practice the methodology use different sets of tools, referred to as “toolchains”, rather than a single one. These toolchains are expected to fit into one or more of the following categories, reflective of key aspects of the development and delivery process:

  1. Coding – code development and review, source code management tools, code merging
  2. Building – continuous integration tools, build status
  3. Testing – continuous testing tools that provide quick and timely feedback on business risks
  4. Packaging – artifact repository, application pre-deployment staging
  5. Releasing – change management, release approvals, release automation
  6. Configuring – infrastructure configuration and management, infrastructure as code tools
  7. Monitoring – application performance monitoring, end-user experience

Some categories are more essential in a DevOps toolchain than others, especially continuous integration (e.g., Jenkins, GitLab, Bitbucket Pipelines) and infrastructure as code (e.g., Terraform, Ansible, Puppet).

Forsgren et al. found that IT performance is strongly correlated with DevOps practices like source code management and continuous delivery.

Categories
Computing

Data Analysis

Data analysis is a process of inspecting, cleansing, transforming, and modeling data with the goal of discovering useful information, informing conclusions, and supporting decision-making. Data analysis has multiple facets and approaches, encompassing diverse techniques under a variety of names, and is used in different business, science, and social science domains. In today’s business world, data analysis plays a role in making decisions more scientific and helping businesses operate more effectively.

Data mining is a particular data analysis technique that focuses on statistical modeling and knowledge discovery for predictive rather than purely descriptive purposes, while business intelligence covers data analysis that relies heavily on aggregation, focusing mainly on business information. In statistical applications, data analysis can be divided into descriptive statistics, exploratory data analysis (EDA), and confirmatory data analysis (CDA). EDA focuses on discovering new features in the data while CDA focuses on confirming or falsifying existing hypotheses. Predictive analytics focuses on the application of statistical models for predictive forecasting or classification, while text analytics applies statistical, linguistic, and structural techniques to extract and classify information from textual sources, a species of unstructured data. All of the above are varieties of data analysis.

Data integration is a precursor to data analysis,  and data analysis is closely linked to data visualization and data dissemination.

Categories
History

History of Java

James Gosling, Mike Sheridan, and Patrick Naughton initiated the Java language project in June 1991. Java was originally designed for interactive television, but it was too advanced for the digital cable television industry at the time. The language was initially called Oak after an oak tree that stood outside Gosling’s office. Later the project went by the name Green and was finally renamed Java, from Java coffee. Gosling designed Java with a C/C++-style syntax that system and application programmers would find familiar.

Sun Microsystems released the first public implementation as Java 1.0 in 1996. It promised Write Once, Run Anywhere (WORA), providing no-cost run-times on popular platforms. Fairly secure and featuring configurable security, it allowed network- and file-access restrictions. Major web browsers soon incorporated the ability to run Java applets within web pages, and Java quickly became popular. The Java 1.0 compiler was re-written in Java by Arthur van Hoff to comply strictly with the Java 1.0 language specification. With the advent of Java 2 (released initially as J2SE 1.2 in December 1998 – 1999), new versions had multiple configurations built for different types of platforms. J2EE included technologies and APIs for enterprise applications typically run in server environments, while J2ME featured APIs optimized for mobile applications. The desktop version was renamed J2SE. In 2006, for marketing purposes, Sun renamed new J2 versions as Java EE, Java ME, and Java SE, respectively.

In 1997, Sun Microsystems approached the ISO/IEC JTC 1 standards body and later the Ecma International to formalize Java, but it soon withdrew from the process. Java remains a de facto standard, controlled through the Java Community Process.  At one time, Sun made most of its Java implementations available without charge, despite their proprietary software status. Sun generated revenue from Java through the selling of licenses for specialized products such as the Java Enterprise System.

On November 13, 2006, Sun released much of its Java virtual machine (JVM) as free and open-source software (FOSS), under the terms of the GNU General Public License (GPL). On May 8, 2007, Sun finished the process, making all of its JVM’s core code available under free software/open-source distribution terms, aside from a small portion of code to which Sun did not hold the copyright.[31]

Sun’s vice-president Rich Green said that Sun’s ideal role with regard to Java was as an evangelist. Following Oracle Corporation‘s acquisition of Sun Microsystems in 2009–10, Oracle has described itself as the steward of Java technology with a relentless commitment to fostering a community of participation and transparency. This did not prevent Oracle from filing a lawsuit against Google shortly after that for using Java inside the Android SDK (see the Android section). Java software runs on everything from laptops to data centers, game consoles to scientific supercomputers. On April 2, 2010, James Gosling resigned from Oracle.

In January 2016, Oracle announced that Java run-time environments based on JDK 9 will discontinue the browser plugin.

Categories
Computing

Cobol

COBOL (“common business-oriented language”) is a compiled English-like computer programming language designed for business use. It is imperative, procedural and, since 2002, object-oriented. COBOL is primarily used in business, finance, and administrative systems for companies and governments. COBOL is still widely used in legacy applications deployed on mainframe computers, such as large-scale batch and transaction processing jobs. But due to its declining popularity and the retirement of experienced COBOL programmers, programs are being migrated to new platforms, rewritten in modern languages or replaced with software packages. Most programming in COBOL is now purely to maintain existing applications.

COBOL was designed in 1959 by CODASYL and was partly based on previous programming language design work by Grace Hopper, commonly referred to as “the (grand)mother of COBOL”. It was created as part of a US Department of Defense effort to create a portable programming language for data processing. It was originally seen as a stopgap, but the Department of Defense promptly forced computer manufacturers to provide it, resulting in its widespread adoption. It was standardized in 1968 and has since been revised four times. Expansions include support for structured and object-oriented programming. The current standard is ISO/IEC 1989:2014.

COBOL statements have an English-like syntax, which was designed to be self-documenting and highly readable. However, it is verbose and uses over 300 reserved words. In contrast with modern, succinct syntax like y = x;, COBOL has a more English-like syntax (in this case, MOVE x TO y). COBOL code is split into four divisions (identification, environment, data and procedure) containing a rigid hierarchy of sections, paragraphs and sentences. Lacking a large standard library, the standard specifies 43 statements, 87 functions and just one class.

Academic computer scientists were generally uninterested in business applications when COBOL was created and were not involved in its design; it was (effectively) designed from the ground up as a computer language for business, with an emphasis on inputs and outputs, whose only data types were numbers and strings of text. COBOL has been criticized throughout its life for its verbosity, design process, and poor support for structured programming. These weaknesses result in monolithic, verbose programs that, although intended to be English-like, are not easily comprehensible.

Categories
Computing

Stack trace

In computing, a stack trace (also called stack backtrace or stack traceback) is a report of the active stack frames at a certain point in time during the execution of a program. When a program is run, memory is often dynamically allocated in two places: the stack and the heap. Memory is continuously allocated on a stack but not on a heap, which is reflective of their names. Stack also refers to a programming construct; to differentiate the two, this stack is referred to as the program’s runtime stack. Technically, once a block of memory has been allocated on the stack, it cannot easily be removed, since other blocks of memory may have been allocated on top of it. Each time a function is called in a program, a block of memory called the activation record (or stack frame) is allocated on top of the runtime stack. At a high level, an activation record allocates memory for the function’s parameters and local variables declared in the function.

Programmers commonly use stack tracing during interactive and post-mortem debugging. End-users may see a stack trace displayed as part of an error message, which the user can then report to a programmer.

A stack trace allows tracking the sequence of nested functions called – up to the point where the stack trace is generated. In a post-mortem scenario this extends up to the function where the failure occurred (but was not necessarily caused). Sibling calls do not appear in a stack trace.
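A small sketch in JavaScript of the nested calls a trace records (the function names are arbitrary): when the innermost function throws, the captured stack lists c, then b, then a, up to the point where the trace was generated.

    function a() { b(); }
    function b() { c(); }
    function c() {
      // The Error object captures the active stack frames at this point.
      throw new Error('something went wrong');
    }

    try {
      a();
    } catch (err) {
      // Prints the message followed by one line per active frame:
      // c, then b, then a, then the calling context.
      console.error(err.stack);
    }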

Categories
Computing

Some say this is where it’s at: Python!

Python is an interpreted, high-level, general-purpose programming language. Created by Guido van Rossum and first released in 1991, Python’s design philosophy emphasizes code readability through use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects.

Python is dynamically typed and garbage-collected. It supports multiple programming paradigms, including procedural, object-oriented, and functional programming. Python is often described as a “batteries included” language due to its comprehensive standard library.

Python was conceived in the late 1980s as a successor to the ABC language. Python 2.0, released in 2000, introduced features like list comprehensions and a garbage collection system capable of collecting reference cycles. Python 3.0, released in 2008, was a major revision of the language that is not completely backward-compatible, and much Python 2 code does not run unmodified on Python 3. Due to concern about the amount of code written for Python 2, support for Python 2.7 (the last release in the 2.x series) was extended to 2020. Language developer Guido van Rossum shouldered sole responsibility for the project until July 2018 but now shares his leadership as a member of a five-person steering council.

The Python 2 language, i.e. Python 2.7.x, is “sunsetting” on January 1, 2020, and the Python team of volunteers will not fix security issues, or improve it in other ways after that date. With the end-of-life, only Python 3.5.x and later will be supported.

Python interpreters are available for many operating systems. A global community of programmers develops and maintains CPython, an open source reference implementation. A non-profit organization, the Python Software Foundation, manages and directs resources for Python and CPython development.

Categories
Computing

At The Kanban

Kanban is a lean method to manage and improve work across human systems. This approach aims to manage work by balancing demands with available capacity, and by improving the handling of system-level bottlenecks.

Work items are visualized to give participants a view of progress and process, from start to finish—usually via a Kanban board. Work is pulled as capacity permits, rather than work being pushed into the process when requested.

In knowledge work and in software development, the aim is to provide a visual process management system which aids decision-making about what, when, and how much to produce. The underlying Kanban method originated in lean manufacturing, which was inspired by the Toyota Production System. Kanban is commonly used in software development in combination with other methods and frameworks such as Scrum.

Categories
Computing

Where Is Svelte?

There are many scenarios where Svelte technology is used, and with some research into Svelte as a topic, I have found that it is beneficial to use simply because Svelte is faster than React and Vue, which are its main competitors. People now see a lot of potential in Svelte, and there is a thriving community around it. Svelte can be used in pretty much any web development you want; you can build an entire web app with it. That is because it is a compiler that takes your components and converts them into efficient JavaScript files with a very small amount of framework code that surgically updates the DOM. Frameworks such as React and Vue use a virtual DOM to optimize rendering changes, which just isn’t as fast. Svelte, however, provides reactivity without using a virtual DOM: it does this by tracking changes to the top-level component variables that affect what each component renders, and re-rendering only those parts of the DOM when changes are detected. This is why Svelte has better performance.
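A minimal sketch of what that looks like in practice: a single counter component that the Svelte compiler turns into plain JavaScript which updates only the affected DOM nodes.

    <script>
      // A top-level component variable: the compiler tracks assignments to it
      // and generates code that updates only the DOM nodes that depend on it.
      let count = 0;

      function increment() {
        count += 1; // a plain assignment is enough to trigger the update
      }
    </script>

    <button on:click={increment}>
      Clicked {count} {count === 1 ? 'time' : 'times'}
    </button>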

There is a company in Brazil called Stone that makes point-of-sale systems that you can put credit cards into to pay for things. They tried building an interface for these devices with React, Vue, and other frameworks, and they couldn’t get the results they wanted: it was just too slow and the user experience wasn’t great. They built it with Svelte instead, and it went amazingly well; there are now 200,000 devices on the streets of Brazil running Svelte. These are low-power devices that don’t have the amount of memory and CPU capacity we have on a smartphone nowadays.

Mustlab is another example of a company where Svelte technology is being used, in this case to make applications for smart TVs. Svelte is the perfect fit for them: it allows them to be extremely productive without having to worry about performance. They are now expanding Svelte into other areas of the business, including the control panel for smart home automation. The future is the embedded web: wearables, the Internet of Things, and smart TVs.

Svelte makes many tasks easier, including defining components, managing component state, managing application state, and adding animations. It produces bundled code that is significantly smaller than the alternatives, and it has you work with plain vanilla JavaScript, CSS, and HTML, with a little extra HTML-conforming syntax on top, making it really easy to learn.

It’s been called the “disappearing framework.” This feature is probably Svelte’s greatest innovation, and every framework is trying to adopt this now. There are zero client-side dependencies. None. Just the end JavaScript of the component itself.
