Introduction
June 21, 2019 by Riston

An Introduction to User Experience
(Image: O12)

I imagine the scenario leading to your visit is that you have an interest in developing a website, but little experience with doing so, and are looking for a good place to start. If you have not thought seriously about UX before and just need a concise introduction to the subject, enough to familiarize you with it so that you can communicate effectively with your designer, then this rather brief series is for you. If you already have a well-developed idea of what you want, feel free to skip ahead; although, since the following posts were designed to be short and to the point, reading through them should not be a waste of your time, at least in terms of making sure you have given some consideration to all the basic aspects of design.

We will be covering some of the most important topics for communicating your brand, such as user research, color and typography, and generally making your app easy to use. This should get you thinking in the right direction, so that if you contract a professional developer/designer you will already be a little ahead of the game and able to communicate your ideas more effectively. If you are just starting out in a venture where you are creating a brand, many of the elements discussed will be important first steps toward making sure that your application's design conveys what you are all about.
Know Your Audience
June 20, 2019 by Riston

Planning the Foundations
(Image: Pete Linforth)

Understanding your target audience, and why they should want to visit your site or use your app, is essentially the most important element in considering an appropriate design. Understanding the basic psychology of your intended audience (why they are visiting your page, what problem they intend to solve by using your app, and how they will integrate your services into their lifestyle and workflow) is key to the success of your project. In a nutshell, your solution should be usable. The goal of usability is to have an application that is effectively "transparent", allowing the user to accomplish the intended task without really noticing the interface in between. The body of knowledge built on top of this area of study encompasses an array of disciplines, including art and design, basic engineering principles, and behavioral and cognitive psychology. Fortunately, you do not need an extensive background in any of these disciplines to make useful decisions.

One important resource for thinking about usability design is user research. While not everyone looking to build a basic web site has the funding to contract formal user research, there are a variety of sources of established insights compiled on the subject. Coglode provides one such resource of easily digested, pre-compiled insights. All of these insights are as useful for developing a content strategy as they are for executing good design principles.

The primary focus here is finding ways to provide a productive user experience that guides the user toward favorable action without frustrating, belittling, or otherwise provoking a negative response. Nearly everyone wants a slick design, an easy-to-use interface, and a modern layout (the exceptions, of course, sometimes being boutique or niche sites that forgo these traits for overriding reasons). The question is: how do you want to facilitate the user's journey? This is largely the shared job of your designer and content strategist; however, it will greatly help to expedite the development process if you have a few ideas on the subject going in.
ChucK: Scripting for Music Composition and Sound Synthesis
January 17, 2018 by Riston

The ChucK programming language was developed by Dr. Ge Wang (now of Stanford University), under the supervision of Dr. Perry Cook, for the purpose of music composition and digital signal processing. ChucK distinguishes itself from other similar languages by providing a simple yet elegant syntax that is easy for artists who are new to programming, while remaining versatile enough to allow experienced programmers to design complex digital signal processing "applications". Another of its key features is that it allows users to change code in order to alter a performance in real time, and it provides a built-in set of interfaces for accepting live input from analog, MIDI, and other digital sources. The MOOC "Introduction to Programming for Musicians and Digital Artists", offered through the Kadenze learning platform, is led by an active contributor to the development of the ChucK language, Dr. Ajay Kapur, Director of the Music Technology program at the California Institute of the Arts.

Overview of the Language and Its IDE, miniAudicle

ChucK is a fully functional object-oriented language, and it is syntactically similar to other compiled languages such as Java and C++ in that it requires the strict declaration of data types for both variables and function parameter signatures. Fortunately, this process is somewhat simplified by making "string" a native data type, instead of requiring the programmer to import a string library or fall back on char arrays as in C++. The language also makes it easy to use dynamic, multi-dimensional arrays without having to include outside container classes, as in both Java and C++. ChucK contains the usual slew of default operators for mathematical operations and concatenation, plus one unique operator: "=>", the chucking operator, which covers both variable assignment and the connection of one unit or process to another.

The language comes with two substantial libraries: the Standard library for working with data in the program, and the Math library for performing essential mathematical computations such as exponents and trigonometric functions. The Synthesis ToolKit library, written in C++, is also integrated into the language, and it is incorporated through the language's built-in Unit Generator (UGen) objects. The UGen family includes a wide range of objects, most notably various types of oscillators and effects such as delays and other filters. Many of these built-in UGen classes feature a large number of functions for manipulating basic sound properties such as frequency and amplitude, and, in the case of Synthesis ToolKit objects, more elaborate functions for physical-modeling parameters such as pluck position, phonemes, and string tension. The standard library also provides a host of interfaces for live performance, such as MIDI, and one of the primary built-in features is direct access to both analog-to-digital (the "adc" object) and digital-to-analog (the "dac" object) conversion. Another key element of ChucK is the necessity of duration: time is integral to running any program written in the language, and a duration must be "chucked" ("=>") to "now" in order for the program to do anything at all.
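As a minimal sketch of these last points (the frequency, gain, and duration values here are arbitrary), the following patches a sine oscillator into the dac with the chucking operator and then advances time; without the final line, the program would end immediately and produce no sound:

    // a sine oscillator routed to the digital-to-analog converter
    SinOsc s => dac;

    // basic sound properties
    440.0 => s.freq;
    0.5 => s.gain;

    // advance time so the oscillator is actually heard
    2::second => now;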
The language also allows the development of highly customized classes that can be composed of various unit generators, audio samples, and effects, chained together in complex arrangements that model signal flow. These classes can also contain accessors and mutators, and the standard library includes functions for converting frequency to MIDI and MIDI to frequency, allowing for simplified scoring and manipulation of object instances.

The miniAudicle is the primary integrated development environment for working with ChucK, and it features three essential windows: the text editor, the console monitor, and the virtual machine. The text editor includes highlighting for ChucK-specific keywords, and the header bar contains the "Start", "Stop", and "Add Shred" buttons, which are responsible for controlling the virtual machine. Since ChucK also provides the option of working with multiple threads ("shreds") and concurrency (adding shreds to run concurrently is called "sporking" in ChucK), having a window showing the active processes is vital, and this is the job of the virtual machine window. The virtual machine window also shows the amount of time each thread has been executing since it was initialized. The console monitor fulfills the basic functionality of any other IDE console.

Composition Methodology

ChucK's standard library does not define the divisions of common musical notation, such as notes indicating pitch or rhythmic duration; however, by utilizing classes the composer can define musical components according to personal preference. By setting a basic tempo using a duration, it is a simple process to derive and assign duration variables such as whole, quarter, and sixteenth notes, and to invoke these throughout the execution of the program. ChucK's duration type allows for a time resolution as fine as a single sample (1/44,100th of a second at the common 44.1 kHz sampling rate, a rate chosen so that its Nyquist frequency covers the range of human hearing) and as long as a week (I would posit that this lengthy duration's usefulness is limited to soundscape installations).

As with rhythmic duration, the lack of built-in notation for pitch can easily be worked around by the designer/composer. Since the standard library allows for two-way conversion between MIDI notes and frequency values, it is easy to define a scale using an array whose values are MIDI pitch numbers. The versatility of the language would also allow the designer to define completely customized intervals using any tuning specification desired, such as an array of intervallic ratios applied to a base frequency. Scoring is also highly customizable, and the use of classic programming loops and other control structures provides a versatile medium for writing and composing music. For loops and while loops can have their iterations regulated by a series of duration values, such as "beats", and conditional statements can determine the execution of specific code blocks based on any relevant boolean expression. All of these control structures can in turn be wrapped into functions and classes in order to divide a score into easily read sections.

One of the most useful applications of ChucK, however, is its versatility as a sound synthesis engine. The basic oscillators and the more complex STK instrument unit generators provide comprehensive building blocks for applying additive and subtractive synthesis. Arrays of oscillators can be created, and each can have its fields accessed and mutated by index, as in the small sketch below.
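As a brief, hypothetical sketch of these ideas (the tempo, MIDI note numbers, and gain values are arbitrary), the following derives a "beat" duration from a tempo, stores a scale fragment as MIDI pitch values, and steps an array of oscillators through it, with each oscillator's fields accessed by index:

    // a basic tempo, and a beat duration derived from it
    120.0 => float bpm;
    (60.0 / bpm)::second => dur beat;

    // a scale fragment stored as MIDI pitch values
    [62, 64, 65, 67, 69] @=> int scale[];

    // an array of oscillators, each patched to the dac and addressed by index
    SinOsc osc[2];
    for (0 => int i; i < osc.cap(); i++) {
        osc[i] => dac;
        0.2 => osc[i].gain;
    }

    // step through the scale, converting MIDI notes to frequency,
    // and let each step last one beat
    for (0 => int i; i < scale.cap(); i++) {
        Std.mtof(scale[i]) => osc[0].freq;       // melody
        Std.mtof(scale[i] - 12) => osc[1].freq;  // an octave below
        1::beat => now;
    }

Wrapping a loop like this in a function or class is then a straightforward way to divide a score into readable sections.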
Synth pads can be created by chaining a variety of oscillator and STK objects through effects and digital filters. It is even easy to design granular synths that take WAV samples and partition them by divisions of ChucK's sample duration.

Final Project/Experimentation

For the final project of this MOOC I was bound by substantive limitations, and while the result is certainly not the best piece of music that I have written, it was nonetheless interesting. I defined the scale as:

    [50, 52, 53, 55, 57, 58, 60, 49] @=> int dMin[];

which, as the name implies, contains the essential MIDI notes of D minor. I also generally found it useful to define basic patterns as 2D arrays, making it easier to distinguish between pitch and duration values:

    [[2, 1, 4, 0], [0, 6, 8, 14]] @=> int bowPat1[][];

Control structures could also easily guide the execution of the score, as in this segment:

    Machine.add(me.dir() + "grain.ck") => int grainId;

    while (measure < 60) {
        if (measure < 2) {}
        else if (measure >= 2 && measure < 4) {
            voxPlayer(beat, voxPat1, dMin[0]);
        }
        else if (measure >= 4 && measure < 6) {
            drums(beat, drumA, 2);
            voxPlayer(beat, voxPat1, dMin[0]);
        }
        else if (measure >= 6 && measure < 10) {
            // etc.

The full code for this assignment can be viewed on GitHub. The initialize.ck file serves a function very similar to a traditional makefile, and the Machine object in the ChucK language provides an intuitive means of adding files to the running virtual machine.

Conclusion

The ChucK programming language is extremely promising as a new tool for musicians, allowing them not only to add to their creative palette but to work directly with sound itself. One of the benefits of using an environment for music creation and sound manipulation like ChucK is that the artist is no longer confined to the limitations of their chosen DAW, and in many cases ChucK can be interfaced with other DAWs. One of the challenges for an artist using ChucK, as with most other forms of computer-generated music, is that it is difficult to incorporate a truly human feel. However, it is possible to a large degree, and in my personal experience ChucK works well for designing elements to integrate into more traditional methods of music-making. Here is the final project I did for this course, a little cold and computerized, but not terrible: Final Project. Also, here is a track on Bandcamp where I programmed a basic granular synth to highlight the bridge toward the end of the piece: Emptiness by Toccata Nosferatu.

Reference: Kapur, Ajay. Programming for Musicians and Digital Artists. Manning Publications, Shelter Island, NY, 2015. *This book is a supplement to the Kadenze course of the same name: Introduction to Programming for Musicians and Digital Artists.
Personality-Informed Neural Training for Cyber-Security Solutions
December 16, 2017 by Riston
(Image courtesy of Geralt)

"If you know your enemies and yourself, you will not be imperiled in a hundred battles... if you do not know your enemies nor yourself, you will be imperiled in every single battle." -Sun Tzu

Introduction

Information security is presently one of the most rapidly expanding fields in the realm of information technology, due largely to the complexity of emerging interoperable networks. Contemporary networks contain more than just laptop and workstation computers: mobile devices such as smartphones and tablets now consume a greater percentage of network resources than traditional machines, and the variety of interoperating devices is increasing further with developments in more pervasive technologies such as "smart buildings", the Internet of Things, the embedded software found in self-driving vehicles, and medical devices capable of wirelessly transmitting information. The phrase "complexity is the enemy of security" has become axiomatic in the cyber-security industry, and the increasing complexity of network systems has provided entirely new planes of attack vectors that have rendered many traditional strategies effectively useless. Techniques and algorithms involving machine learning and adaptive artificial intelligence are also growing, and many firms are working to integrate machine learning techniques into security protocols.

Attacks on a networked system can manifest in a multitude of ways, ranging from basic web-based attacks involving cross-site request forgery and SQL injection to more sophisticated orchestrations such as Distributed Denial of Service or Advanced Persistent Threat attacks. "Cognitive computing scans files and data using techniques such as natural language processing (NLP) to analyze code and data on a continuous basis. As a result, it is better able to build, maintain, and update algorithms that better detect cyberattacks, including Advanced Persistent Threats (APTs) that rely on long, slow, continuous probing at an almost-imperceptible level in order to carry out a cyberattack." [1]

Within the last few years, analytics has also provided insight into determining some of the psychological characteristics of computer users based on social network behavior patterns, thus opening the door to using analytic techniques for discerning the personal traits of potential threat agents. Gaining insight into the personality of attackers themselves may yield useful information that could provide leverage for an adaptive system to not only detect but also effectively defend against an attack. Integrating adaptive AI techniques such as deep neural networks with cybersecurity objectives may be the most effective approach to addressing the increasing attack surface of modern and emerging networks, and the efficacy of this approach could be greatly enhanced by using psychological determinants that would enable the construction of strategically useful threat models in real time.

Assets, Threats, and Current Practices

One of the first steps required of any organization when developing a security policy is to accurately assess that organization's assets, relative both to their intrinsic value and to the collateral damage that could be caused by those assets being rendered unavailable or exploited by malevolent actors.
While understanding the value of an organization's assets is generally useful for determining the appropriate measures for securing a given network [2], understanding the nature and value of assets can also provide insight into building effective threat models. Understanding common characteristics of threat agents, such as their intention, motivation, and source, can give the organization a useful basis for building a taxonomical hierarchy of potential threats [3]. Both the classification and prioritization of these various threat agents can be used to provide features and rules for informing the training procedure of an AI's neural network.

Data mining of social media networks has provided useful resources for researching the potential of predictive personality modeling. One such study, reported in 2013, used a variety of features, including linguistic and other social network patterns, to determine personality characteristics, and the results were effective enough to encourage future research in this field [6]. The measure of personality used for this study was the "Big Five" model, which comprises the traits of Extroversion, Neuroticism, Agreeableness, Conscientiousness, and Openness. It is likely that further research in this domain could yield insights into common threat agent attributes such as skill and motivation, or indicate whether an attacker is operating purely out of a desire for personal gain or out of anger. This may, in turn, help an AI to exploit the attacker's personality weaknesses in order to inform an appropriate strategy.

The most common implementations of network security involve both Network Intrusion Detection Systems (NIDS) and Network Intrusion Prevention Systems (NIPS), and most applications of these systems are a composite of both approaches. Signature-based detection models have traditionally been the most common approach to detecting attacks; however, with the increasing sophistication and variety of attack methodologies, this approach is proving ineffective as a stand-alone solution. Researchers have turned to refining anomaly-based detection methods, but in its current state of development this approach is still challenged by frequent false positives for otherwise normal network behavior [4]. These shortcomings of anomaly-detection-based NIDS (ADNIDS) have been successfully mitigated by the adoption of deep learning techniques for accurately classifying network anomalies [5].

Basic Neural Networks and Current Strategies

The concept of neural networks as a paradigm for designing adaptive artificial intelligence has existed for more than half a century, and the original construct of an artificial neuron was the perceptron. The perceptron, developed by Frank Rosenblatt, is essentially a function that accepts a combination of binary inputs in order to produce a single binary output. The most common adaptation of the perceptron used in contemporary models is the sigmoid neuron, a perceptron-like unit that allows for both weighted inputs and a bias factor for the neuron itself. The weighted inputs and bias of the sigmoid neuron help facilitate more effective decision making for the algorithm as a whole, and training these neurons consists of adapting the specific weights and biases according to the information provided to the network [7]. The architecture of a deep neural network comprises essentially three classifications of neurons: an input layer, an output layer, and a series of "hidden" layers in between.
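To make the sigmoid neuron concrete before moving on: in its standard formulation, a neuron with inputs x1, ..., xn, weights w1, ..., wn, and bias b outputs

    σ(w1·x1 + w2·x2 + ... + wn·xn + b),   where σ(z) = 1 / (1 + e^(-z)),

so the output varies smoothly between 0 and 1 rather than flipping between the perceptron's hard 0 or 1. Training then amounts to adjusting the weights and biases so that the output layer classifies its inputs (for example, "anomalous" versus "normal" traffic) as accurately as possible.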
The number of these hidden layers varies according to the specific implementation, and a greater number of intermediary layers allows for more specialized training of the network [7]. Approaches to training neural networks include supervised, unsupervised, and semi-supervised training, with self-taught methods considered the most valuable avenue of research for future implementations. The efficacy of a given neural network implementation is generally judged according to its accuracy, and the metrics for determining accuracy are precision, recall, and F-measure, the last being the harmonic mean of precision and recall [5]. In research, most deep neural network implementations for intrusion detection are trained using the Knowledge Discovery and Data Mining (KDD) datasets, the most pervasive version being the KDD Cup 99 dataset. These implementations are generally used to parse through network logs in order to detect anomalies in network activity, such as unusual packet volume or other unusual user activity. When discussing the viability of deep learning strategies, an unsupervised approach is considered the most useful, and one methodology is rule-based clustering, which allows the programmer to establish specific rules and objectives for the algorithm while letting the network determine its own categorizations.

Dynamically Incorporating Personality Into Threat Models

Persona non Grata is a threat modeling approach that specifically tasks users with modeling threats according to an attacker's potential motivations and abuses; however, much like signature-based NIDS, it can be limited to a predefined subset of threat agents [8]. Threat agent personality characteristics, at least of the intentional variety, can probably be reduced to a specific subset that can serve as a selection of rules for defining the features of a neural network. Defining anomalous network activity, together with being able to respond appropriately to a given threat based upon its distinguishing characteristics, should be the primary goal of the neural network.

In order to generate a normalized baseline of network activity, the implementation must be able to construct accurate user models that determine whether a user is an authorized operator on the network. One possible strategy for building user profiles is the silent application of cognitive and behavioral biometrics, such as keystroke dynamics, developed dynamically over time [9]. Such a practice could help determine whether an attack is being orchestrated through compromised access controls, such as a password that has been hacked. This level of detailed user profiling could help establish and maintain a more accurate baseline of network activity, while also detecting compromised accounts. Defining attacker characteristics and normal network activity would provide a very useful and dynamically configured subset of rules whereby a neural network could train itself and adapt in perpetuity. Since these algorithms operate by continuously scanning through a stream of network logs and other network data, it is important to implement an algorithm that can initiate a Dynamically Expanding Context of analysis while making certain that unimportant anomalies are properly discarded, in order to avoid unnecessarily invoking defensive and emergency procedures.
This could manifest through a series of virtualized scenarios, much as when designing a predictive algorithm for a chess game, and a pre-defined hierarchy of procedures could be initialized based on stochastic consideration of these virtualized scenarios.

Ethical Considerations and Conclusion

As with any case of invoking artificial intelligence to predict and monitor personality attributes, there are ethical considerations that must be integrated into the development process. In profiling user activity, it is important not to allow the algorithm to reveal potentially embarrassing or exploitable information about the user, especially if the user's activities are in compliance with the organization's use agreement. There is also the likelihood that data from predictive algorithms could be used to exercise discriminatory bias against minorities or persons with underlying mental conditions, as in the case of criminal risk scores [10]. For these reasons it is important that ethical considerations be incorporated into the design process, that there be limitations on the application's offensive capabilities, and that sufficient administrative override be included.

Beyond the mentioned ethical concerns, incorporating personality traits common to threat agents into rule-based neural network training could provide an invaluable toolset for future models of integrated security systems by allowing the AI to essentially "get into the head" of a malicious attacker and exploit their natural inclinations to their disadvantage. An attacker predisposed to irritability and neuroticism could be goaded into making a mistake out of increased frustration, or, if the AI determines that the attacker is financially motivated and not technically proficient, the attacker could be tricked into providing personally identifying information by exploiting their desire for money. This approach could also save an organization resources otherwise wasted on unnecessary downtime, by defining normalized user activity through personalized biometrics against which anomalous network activity can be accurately detected.

References

1. Greengard, Samuel. "Cybersecurity Gets Smart." Communications of the ACM, vol. 59, no. 5, pp. 29-31.
2. Merkow, Mark S., and Breithaupt, Jim. Information Security: Principles and Practices. 2nd ed. Pearson Education, Indianapolis, IN.
3. Jouini, Mouna; Rabai, Latifa Ben Arfa; Aissa, Anis Ben. "Classification of Security Threats in Information Systems." Procedia Computer Science, vol. 32, 2014, pp. 489-496.
4. Lambert, Glenn Monroe. Security Analytics: Using Deep Learning to Detect Cyber Attacks. University of North Florida School of Computer Science, 2017.
5. Niyaz, Quamar; Sun, Weiqing; Javaid, Ahmad Y.; Alam, Mansoor. A Deep Learning Approach for Network Intrusion Detection System. College of Engineering, The University of Toledo.
6. Markovikj, Dejan; Gievska, Sonja; Kosinski, Michal; Stillwell, David. "Mining Facebook Data for Predictive Personality Modeling." AAAI Technical Report WS-13-01, 2013.
7. Nielsen, Michael A. Neural Networks and Deep Learning. Determination Press, 2015.
8. Shull, Forrest. "Cyber-Threat Modeling: An Evaluation of Three Methods." SEI Blog, Nov. 11, 2016.
9. Ciampa, Mark. Security+ Guide to Network Security Fundamentals. 5th ed. Cengage Learning, Boston, MA, 2015.
10. Angwin, Julia; Larson, Jeff; Mattu, Surya; Kirchner, Lauren. "Machine Bias." ProPublica, May 23, 2016.
Microservices: The Capstone of SOA and Agile Methodology
December 8, 2017 by Riston
(Image courtesy of Geralt)

Service Oriented Architecture (SOA) evolved from many of the defining characteristics of the object-oriented programming paradigm, including key concepts such as encapsulation, containerization, and reusability. SOA developed, initially in the 1990s, as a response to the inefficiencies encountered while attempting to update legacy software systems, where updating one aspect of a system could often cause the complete failure of the system. The Agile development methodology likewise arose to mitigate the rather cumbersome "Waterfall" paradigm of engineering that dominated most software development prior to the mid-2000s, and it places emphasis on people and the interactions between them over rigid processes. This approach to software development encourages the fast delivery of reliable software in an adaptable environment where engineers are able to work according to changing client specifications. The agile development philosophy eventually progressed to the integration of development and operations teams, assuring the prevalence of the practice commonly called DevOps. The SOA practice of segmenting an application into reusable, language-agnostic services eventually evolved into the microservices architecture, one of the principal tools used by DevOps engineers, which in turn allows for the rapid deployment, integration, and automation of development processes.

Brief Overview of SOA

Author Thomas Erl defines eight basic principles of Service Orientation: a formal service contract, loose coupling, abstraction, composability, reusability, discoverability, autonomy, and statelessness [1]. Having a formalized contract allows the interrelationships between services to be defined, while loose coupling means that no component service is dependent on other services. Abstraction in the context of SOA is parallel to the OOP concept of encapsulation, in that it makes data and logic available to consumer entities only as necessary, which in turn facilitates easy reuse of the services. Discoverability is a key element of a service's usefulness, since if the service is not discoverable, no one will know to use it. Autonomy and statelessness are also key to allowing services to be used and updated independently of the rest of the application. All of these principles then work together to facilitate composability, wherein services can be combined to execute specific business needs of the application.

Many issues were encountered when implementing Service Orientation within previously monolithic legacy applications, and most modern applications built from service components are polyglot and require intelligent endpoints. Thus was born the microservices architecture, which in essence fulfills the promises brought forth by SOA [2]. The microservices architecture has worked effectively in the proliferation of cloud-based applications and development, and it gives the developers of specific services the versatility to use the best languages and tools for the job at hand. Also, the design principle of "smart endpoints and dumb pipes" [3] further concretizes the SOA principle of statelessness while increasing service interoperability. These features have allowed the microservices architecture to blend perfectly with the philosophy of agile development, and consequently it has become a defining tool for modern DevOps practices.
Brief Overview of DevOps and Agile Methodology

The defining characteristics of the agile methodology are rapid development and deployment, facilitated by an emphasis on client and developer interaction. "People over process" is one of the common mantras of this philosophy, and the inclusion of consistent client input provides a degree of agility in the development process, allowing necessary changes to be implemented throughout all stages instead of having to wait until a finalized version is released. Another major principle of the Agile methodology is the emphasis on simplicity and on maximizing the amount of work not done. This simplicity can help mitigate an excessive "cargo cult" mentality among members of a development team by eliminating the addition of unnecessary moving parts that may cause problems later down the road.

DevOps engineering shares many principles that are congruous with Agile philosophy, although the primary focus of DevOps is to integrate the development and operations teams for collaboration throughout the development process. Much as with the client interaction facilitated in agile development, the inclusion of operations in the development process allows insights to be shared throughout the lifecycle of the application, rather than each team having to wait until the deployment stage to start addressing errors. The primary practice areas of DevOps can be defined as Infrastructure Automation, Continuous Delivery, and Site Reliability Engineering [4]. Infrastructure automation is the practice of automating, through scripting, the provisioning of software environments such as servers and operating systems. Continuous Delivery, which can be lumped in with Continuous Integration, is the process of rapidly deploying and updating an application; continuous integration ensures that the code base is consistently up to date with the latest versions across development teams. Site Reliability Engineering is generally concerned with the scalability of the application, so that as the application extends in functionality and volume it continues to operate as intended. The application of these practices is consistent with agile philosophy, and the microservices architecture is particularly well suited to the aims of DevOps.

Best Practices and Application of Microservices in the DevOps Environment

Most modern applications are built by constructing and leveraging various Application Programming Interfaces (APIs) in order to fulfill various services. Scalability concerns remain at the forefront of the challenges faced by modern development teams, and the introduction of a microservices architecture allows teams to use the best tools for the specific service that they are tasked with building and maintaining. The most notable advantages of integrating DevOps and microservices can be summarized as follows: each service is independently deployed, each service scales independently, services can be written in different languages, the failure of one service will not directly impact another (unless, of course, the service is a composite), communication between teams is enhanced, and maintaining the application is made easier [5]. All of these features relate directly to the essential principles of both Service Orientation and Agile methodology. One of the benefits of using a DevOps approach while implementing a microservices architecture is that oftentimes the services are distributed among various servers, requiring the ability to work adaptively across multiple OS environments.
Since microservices generally follow the SOA principle of loose coupling, the failure of one service should not necessarily affect the entire system, which makes the debugging process more manageable by helping to isolate both the service and its respective environment. This allows for greater flexibility among development teams tasked with individual services, in that they are able to more fully dictate which servers and environments are best suited to their respective service, and to have greater control over the configuration of that environment.

As cloud computing and the various other "X-as-a-Service" practices continue to rise in popularity, so will the adoption of DevOps and its associated microservices architecture. This adoption will greatly aid in facilitating business scalability, allowing businesses to easily adapt to present and future considerations regarding their application. Being able to add a service or update an existing service without concern over breaking the entire application offers a significant advantage over monolithic legacy code bases, and the integration of automation tools for configuring various disparate environments greatly expands the tools available for a specific business requirement. DevOps is also being extended into the realm of security, in which case it is often called DevSecOps. Agile philosophy and Service Orientation have found the fulfillment of their promises in DevOps and the microservices architecture, and the practices derived from them will essentially propel the future evolution of software development.

References

1. Erl, Thomas. "The Principles of Service Orientation, Part 1 of 6: Introduction to Service Orientation."
2. Anuff, Ed. "API-Centric Architecture: SOA Gives Way to Microservices." Apigee, May 13, 2014.
3. Morganthal, JP. "SOA vs. Microservices." DevOps.com, October 16, 2017.
4. Mueller, Ernest. "What Is DevOps?" The Agile Admin, Aug. 2, 2010 (updated July 24, 2017).
5. Subhakars. "Service Oriented Architecture and DevOps Complement Each Other." WS Tech Blog, Feb. 11, 2017.
Muses, Memory, and Expedient Comprehension
October 17, 2017 by Riston

Practical advice for students and others seeking to increase their ability to master new subjects.

Introduction

This post is part multi-book review, part inspired speculation, and part technique. At the end of the post I also include an application of the suggested technique, and illustrate some of the insights gained while developing it. To give the reader a short background on the author of this post: my early years at college were spent studying music and psychology, and I have long held an interest in mythology and "esoteric" techniques, such as visualization and meditation, that are useful in improving quality of life. Music is a discipline requiring intensive use of the mental faculty of memorization, and psychology has been invaluable in providing functional, working models of cognition and learning.

After having to drop my college career for an extended period of time due to personal circumstances, I continued to write and study music, and a couple of years ago began to teach myself basic computer programming. I had initially been inspired to learn how to program while studying audio engineering at UNC Asheville; however, whether through lack of focus or self-discipline, I did not actually initiate the endeavor until quite a few years after I had left. Not having performed much in the way of serious academic or intellectual pursuit for several years, outside of writing music and lighter reading, the autodidactic approach to learning computer science was a little daunting at first. Fortunately, during that initial period, I had other goals and responsibilities that were my top priority, so I did not feel bad about my haphazard motivation. Eventually, the fundamentals of programming began to click for me, and I decided that I wanted to finish out my bachelor's degree in that field. My resolve to return to school was set, and since I had already completed a lot of coursework I enrolled at the junior level.

The first two semesters were extremely trying, and while I did well, I constantly found myself riding right on the deadline. Constantly struggling with deadlines is a terrible and stressful way to operate at any task in life, and while procrastination defined much of my attitude in earlier years, I have in recent years sought to eliminate it entirely from my life. One factor holding me back from getting ahead in classwork has been a slightly less than average reading speed. I read a great deal, and I enjoy reading, but a discipline such as computer science requires a lot of reading through technical documentation, and in an academic atmosphere you have even more reading to do on top of that! During the previous summer I had also initiated some intense long-term programming projects making use of multiple languages (as most programs do), and realized that I needed to revamp my memorization techniques in order to avoid having to rely on Google and GitHub to remember less-than-pervasive standard methods and their syntax. Toward the close of the summer break, I decided that I wanted to do something to enhance my learning, so I ended up getting two rather inexpensive Kindle books in an effort to augment my learning abilities: Unlimited Memory by Kevin Horsley and Speed Reading with the Right Brain by David Butler.
While I cannot say that I am, after two months, a master of memory, or that I have improved my reading speed by an incredible 300%, I have used the methods in these two books to synthesize an expedient approach to breaking down and assimilating information from texts, and from applying these methods I have gained a great deal of insight. The funny thing is that many of these insights and methods had made minor intrusions into my consciousness over the years, in various forms and from various sources; it just was not until reading these books that everything snapped together, quite like a hundred-piece jigsaw puzzle suddenly assembling itself with practically no effort. While the technique suggested at the end is not entirely new or original, I did in fact develop it through my own inference, and I hope to present it in an enlightening and intuitive manner.

Memorization Techniques

Unlimited Memory is a fairly concise and easy-to-read book written by a grandmaster of memory technique by the name of Kevin Horsley. According to the author, he was plagued throughout his childhood by dyslexia and had little hope of ever being able to perform well in an intellectual capacity. He had at some point begun reading books by Tony Buzan, who is credited with inventing the mnemonic technique of mind mapping, and was set on the path to becoming world-renowned for his ability to memorize information. While I have not personally read any of Tony Buzan's work, the basic technique is presented in Horsley's book.

Horsley's book covers two basic and very old memory techniques, the method of loci and the "peg" method, which in turn underpin the fundamental basis of Buzan's mind-mapping technique. The method of loci, more commonly known as the use of "memory palaces", exploits spatial memory, a mental capacity commonly found in more highly developed members of the animal kingdom. Spatial memory forms the basis of cognitive mapping, a concept that has been experimentally verified through psychological research and is a cornerstone of cognitive psychology. Memory palaces have been in use for millennia, and were an invaluable technique for learning, digesting, and recalling information before Gutenberg introduced the printing press in the fifteenth century. The "peg" method of memorization was developed much more recently, but has also produced successful results. One of the basic principles of the peg method is to build up a set of correspondences with each number, either through words that rhyme with the number (e.g., "one" with "bun") or by equating the shapes of numbers with letters and objects. The basic idea is that the practitioner turns numbers into associative pegs that can then be used to memorize lists in order. Horsley expounds on varieties of these techniques throughout the book, and frames the information in a practical context by providing adequate examples. He also emphasizes the proper mindset for approaching learning, such as cultivating interest in the material being learned. While performing these techniques will not make you a master memorizer overnight, they will help you perform feats of memorization quite quickly, and with diligence you are limited only by your own imagination.

Reading Comprehension

Another book which proved to be quite helpful was Speed Reading with the Right Brain by David Butler.
I had never been very impressed with the notions behind "speed reading" gimmicks, such as making use of rapid eye movements or skimming through words while hoping for comprehension of the material to magically manifest from my subconscious, so Butler's approach to increasing reading speed was for me an appealing concept. The emphasis of "reading with the right brain" is based, as one might expect, on the imaginative faculties of the mind. The primary focus of this work is on using the techniques of visualization and conceptualization to increase reading speed through increased comprehension. Sometimes it is necessary to state the obvious: reading and comprehension are only superficially different; they are in essence the same thing, or, at the very least, without comprehension reading is a fairly useless activity.

The fundamental method that Butler uses in this book involves reading phrases instead of words. While some "speed reading" gimmicks advocate scanning line by line, which makes little rational sense, reading units of ideas instead of words or arbitrary lines of text is a highly effective technique. Like most techniques, this takes some practice, and every chapter of Butler's book is punctuated with reading exercises. While learning to read ideas and phrases through conceptualization and visualization may at first slow down reading speed, the payoff is experienced almost immediately through the near-automatic elimination of poor reading habits. Using the "right-brained" approach to reading contributes substantively to the art of comprehension, and by extension enhances retention and digestion of the material read.

The Muses and Imagination

So, the reader is probably curious about the inclusion of the Muses in the title of this post. Given the insights and techniques of both the aforementioned books, I have gained some understanding of why the Muses, and specifically Mnemosyne, the mother of the Muses, held the level of veneration afforded to them by classical cultures. Mnemosyne, from whose name the word "mnemonic" is derived, was in classical literature the goddess of memory and imagination. She was considered to be the daughter of Uranus and Gaia, and from these two deities did she receive her attributes. Through her relations with Zeus (Jupiter, god of prosperity and philosophy) were born the nine Muses. Attributed to the Muses is the development of all the higher aspects of human culture, such as art, music, history, and science, though in modern common association they are generally related to poetry. Poetry was and has been something of a universal constant for the transmission of all of the important kernels of individual cultures, including their collective wisdom, traditions, and mythical aesthetics. In antiquity, the substantive portion of a culture's knowledge was transmitted orally, from parent to child or teacher to student. This, in turn, required the individuals responsible for this transmission to be able to memorize vast sums of information as accurately as possible, and the techniques of verse and rhyme afforded an effective and entertaining means of doing so. As modern cognitive psychology has confirmed, learning is predominantly an associative activity, and in turn the art of association is a creative act involving the imagination. We can, therefore, gain a great deal of insight from noting that the mother of higher culture is also the goddess of memory and imagination.
Both of the books reviewed above affirm that the key to intellectual pursuits lies in the fertile ground of imagination and creative association. Many people today view intellectual and technical subjects as "dry", and this erroneous convention without doubt holds them firmly placed in relative mediocrity. Overall, thousands of years of encoded human experience subvert the false dichotomy of "left-brained" and "right-brained" activities and people, and I posit that the art of learning is necessarily both. These misconceptions regarding learning are like cultural viruses that have been passed on for generations.

A Suggested Technique

While this technique is, in and of itself, not entirely new or unique, I did in many ways devise the method through inspiration after not quite completing the two books reviewed, and therefore hope that perhaps I can offer a slightly different perspective on it. The technique is based around the notion of breaking down a text for relative memorization and mastery of the subject matter contained therein. My approach is based on the notion that memorizing the table of contents is quite like generating associative "pegs" for the information the chapters contain. This can be taken further by memorizing the subheadings of each section of a chapter, but that level of memorization is determined solely by the degree of detail to which you wish to learn the information.

This technique can quite literally be classified as a "Tree of Knowledge", a cornerstone concept that pervades the world's cultural mythologies in some respect or another. The book is the tree, the chapters and subheadings are its branches, and the leaves and fruiting bodies are the information contained within its pages. Memorizing the titles creates associative pegs where, using the visualization techniques involved in "reading with the right brain", the ideas are able to bloom within a structural matrix of context and meaning. This also gives one the perceptual advantage of reading from the top of the tree, where one can maintain a complete, bird's-eye view of the information being presented, and every idea one reads automatically remains in its proper place in relation to the other material presented. One can also view the technique as creating a skeleton, where the body is allowed to flesh out from the bones, creating a complete, complex, and lifelike being.

First Experiments

For many years, a series of books had sat upon my shelf waiting to be read: a two-volume series titled Musimathics, written by Gareth Loy. I had wanted to read the two books since getting them, but had been waiting for the "right time" to tackle them. While I will not get into the subject matter in detail here, suffice it to say that these two volumes are a very thorough treatment of the relationship between music and mathematics, an area that underpins much of my life's work. The work is a multidisciplinary study, coalescing music theory, tuning systems, physics, acoustics, psychoacoustics, and digital signal processing into a veritable course of study. This work apparently took the author nearly a decade to complete, and during this period he developed a C++-style programming language for composing music, named Musimat. With about six weeks left before the start of a new semester, I decided that it was finally time to crack into the rather dense two volumes.
With the first volume, I memorized the table of contents using just the basic peg technique. Throughout this volume I experimented with the other techniques picked up from Horsley's book, and found that I was quite successful in retaining the rather detailed information in the various chapters. Before I finished this book, which took me about three weeks to get through, I had decided to use the "memory palace" technique on the second volume, as I believe the peg method is a little more efficient for memorizing singular concepts than for serving as a directory for more voluminous expanses of information. The second volume of the Musimathics series is far more dense in mathematical concepts, although it does gently guide the mathematically less experienced reader through a detailed, explanatory introduction to complex numbers and the equations generally used to model sinusoidal motion, such as Euler's identity. Over five hundred pages of mathematically dense reading material was a formidable test of my developing learning technique, especially so early in its use. A good bit of the information in the first volume I had encountered previously, although the treatment in that text is much more detailed. The second volume contains a lot of material that I had previously been only superficially introduced to, although as a synth player I have frequently encountered and used techniques like granular, subtractive, and additive synthesis. There are some sections that I will probably reference in the future, but overall I think that before improving my approach to learning I would have floundered greatly in reading through these two volumes, and my retention of the material would have been more haphazard than practically useful.

Conclusion

For anyone who has a wide variety of interests, improving the ability to metabolize volumes of information quickly and efficiently is of paramount importance. Being able to digest texts quickly helps in maintaining the larger picture of the subject being explored, while reading too slowly, and without certain memorized "staves" of information around which concepts can easily associate, can adversely affect the individual's ability to grasp the subject of the book. It is important to note as well that this approach to learning has been around as long as human civilization, and that the ancient Greeks codified the innate human ability to do so in their myths of Mnemosyne and the Muses. I believe that there are many parallel applications of this technique and the insights gained therefrom, and that an endless variety of methods can be derived from experimentation.