Lyrici Triade

Music, Software, Philosophy


A Journey in Product Development: Part 3 – Creating a Clickable Prototype

January 11, 2021 by Riston

In Brief…

This is the culmination of the design stage, before investing in any solid development decisions. Gathering user feedback from this artifact will be essential for finalizing a full-scope requirements document and technical design decisions such as infrastructure and stack. If the idea is terrible and poorly received, at least the development cycle can be avoided before hiring additional team members. In this case I'll likely be the sole contributor and will work to complete this project whether it's well received or not, but in a more business-centered context this could be an important stage in determining whether the product lifecycle progresses or stalls out.

Moodboards and bringing the app to life.

Since the moodboard for the design contains copyrighted material, I will not post a screenshot of it here, but you are welcome to check out my Pinterest board for this project. I feel the design should have a look and feel appropriate for a music-centric app with "data-ist" aesthetics. A "dark mode" is almost always a great choice for visualizations outside of purely business-analytics-oriented interfaces, and this project should emulate that style while using a vibrant color theme for actionable assets and visualizations. The moodboard will also continue to provide inspiration for developing the final visualization elements of the application.

General Flow

Login/Register Sequence

A fairly basic and generalized flow for this function should be sufficient. The only consideration is for later stages of the app's development, where we may wish to have the user enter more personalized data for display in the online forum.

Loading and Login

For the login process, the app begins with a loading screen (currently containing a placeholder for the logo), then follows a sequence that can be tested in the prototype. I may add a password set/reset screen in the finalized design, but that decision can wait.

Visualizations

For the visualization feature of the app, I created a few placeholder visuals using Illustrator, along with a button for selecting them in the bottom toolbar. There are visuals not only for streaming data, but also some basic "waveform/spectral" visualizations:

  • Radial Visual (Generic but works as a placeholder)
  • Spectrogram Visual
  • Classic Waveform Visual

Linking things up

Adobe XD features fairly robust options for quickly generating clickable prototypes and publishing them for review. It is quite fascinating to see all the connection widgets graphically displayed, and it's even more interesting to see the proposed architecture while in clickable-prototype mode.

Prototype Wired Up

In Action…

A basic prototype is now complete, and can be viewed live here.

Filed Under: Art & Design, Software Development, Technology, Uncategorized, Web Development, Writings

A Journey in Product Development – Part 2 : Decisions and Wireframes

January 9, 2021 by Riston

Decisions

Noticing that there was little need for distinct screens for sampling, visualization mode, and the home screen, I decided to combine them into one. The different modes and functions can be selected by buttons, menus, and widgets within the UI. The distinct screens are now the main screen, the sample breakdown screens, the login/registration flow, and the online forum and repository.

While I have only made a few design decisions thus far, I have been able to break down the concept of the UI and the basic user-interaction flow more efficiently. The basic layout of each screen is in place, and the basic interaction flow has been detailed using red annotations. All of the assets shown were developed personally in Adobe Illustrator.

Wireframe

Wireframe for Spectrafact app concept.

Filed Under: Art & Design, Technology, Web Development, Writings

A Journey in Product Development – Part 1 : Ideation and Content Mapping

January 6, 2021 by Riston

Introducing Spectrafact

Intelligent acoustical investigation app for sound identification and visualization.

Description

Spectrafact harnesses the power of machine learning and predictive analytics to isolate and identify prominent environmental sounds. Each sound can then be broken down into its principal waveforms and analyzed for its spectral properties. The user can then choose among multiple ways of visualizing the output data.

Functions

  1. Identify specific sounds (mostly instruments, but other sound sources as well).
  2. Offer approximated waveform/wavelet properties that can be used for reconstruction via synthesis.
  3. Provide acoustic analysis for live sound mixing or basic acoustical investigation during field recording.
  4. Build a network and repository of shared samples and associated data.

Target Audience

  1. Professional and Amateur Sound designers.
  2. Acoustic and Audio engineers.
  3. Musicians and Performers.
  4. Hobbyists and Scientists interested in learning more about sound.

Since this application would appeal to musicians and a somewhat more technical audience, a more sophisticated design is appropriate; a whimsical or light-hearted design concept would likely be off-putting. The app's design system should, however, feature interesting and somewhat "techie" design choices. A dark theme would likely be most appropriate, since a key feature of the app is visualization. This will be fleshed out further in Part 3 of this series, featuring the clickable prototype built in XD.

Process

The main functionality of this app can be broken down to three primary user-flows. Firstly, the app is intended to allow users to visualize sound data that is either real-time or recorded. Secondly, the user should be able to record a sample, and then have the system intelligently classify the most likely sounds within the acoustical environment. Thirdly, the user should be able to connect with other app users and share samples via an online repository.

Content Map for Spectrafact app user flow.

Filed Under: Art & Design, Technology, Web Development, Writings

Mock Album Covers – Vulpes

July 28, 2020 by Riston

Vulpes, the Fox-themed band.

“What the Hell?” is likely the first question on your mind.

In brief, I've got a bit of time on my hands and have been up-skilling through Coursera. One of the areas I'm branching into is graphic design, specifically beefing up my Adobe Creative Cloud skills. For the course in question, the guideline suggests choosing a particular subject (likely an animal) to use throughout the course. Having loved foxes since I was a kid, I needed little thought to settle on my subject.

Fast-forward —>

One of the optional assignments suggested creating a series of images, and I was soon struck with inspiration: a series of mock album covers! The "band" is Vulpes (no relation to the early-'80s punk band of the same name), with an entirely fox-themed series of albums! I've shared the series here, despite it not being "professional" work by any means, because I'm sure there are a few folks out there who might actually enjoy the humor in these. Anyway, I had a lot of fun making them and experimenting with creating textures in Photoshop!

Vulpes:

Filed Under: Art & Design

Bob’s Awesome Music Venue in Fargo, ND!

June 26, 2020 by Riston

Where to open Bob’s awesome underground alternative music venue.

Machine Learning Capstone Project for
IBM’s Data Science Professional Certificate

1. Description of the problem

Bob Smith has recently come into a modest sum of money and would like to fulfill his dream of opening a mid-sized music venue where he can book both local and larger performing artists, as well as provide a safe and interesting hangout not only for himself but for adults of all ages. While he has the money to invest, he still needs to be prudent with the funds, so he has some financial limitations. He also wishes to open specifically within the city of Fargo, North Dakota (for what reason, only the gods may speculate).

1. Overhead/Rent – He needs a space large enough to host events and to house a small kitchen, a bar, a barista bar, and a seating section, all while minimizing rent overhead.

2. Crime – He needs a location that economizes on rent but minimizes violent crime, in order to cut down on venue security and provide a safer environment for his patrons.

3. Accessibility to Desired Demographic – Since it will be a music venue, the location will likely need to be accessible to younger, college-aged crowds who may not have reliable transportation. Access to hotels might be desirable, and being located in relative proximity to places of congruous interest may also be valuable.

2. Background, Data, and Approach

Pricing:

There are a total of 38 neighborhoods within the city of Fargo itself (as defined by Zillow's data on OpenDataSoft), and median property value (ZHVI) can also be pulled in CSV format from https://www.zillow.com/research/data/ . While home value does not equal commercial property value, it can be used to make general assumptions about the relative costs likely associated with an area.
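
As a rough illustration, here is a minimal pandas sketch of pulling the latest per-neighborhood ZHVI value; the file name and column names (RegionName, City, State) are assumptions based on Zillow's usual export format, not taken from the original notebook:

import pandas as pd

# Hypothetical file name; Zillow's neighborhood ZHVI export is a wide CSV
# with one row per region and one column per month.
zhvi = pd.read_csv("Neighborhood_Zhvi.csv")

# Keep only Fargo, ND neighborhoods and the most recent month's value.
fargo = zhvi[(zhvi["City"] == "Fargo") & (zhvi["State"] == "ND")]
latest = fargo.columns[-1]
median_values = fargo[["RegionName", latest]].rename(
    columns={"RegionName": "neighborhood", latest: "median_value"})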

Crime:

While crime stats are available for consumption, I thought it might be interesting to run a keyword search on Google and scrape the sites indexed there in order to create a crime index. Using Python's requests module to fetch content from open sites and Beautiful Soup to parse it, I compare a selection of keywords associated with violent crime and count the number of articles that reference both crime and the neighborhood in question. (This is actually my first attempt at a crawler/scraper function, despite coding for quite a few years now.)

Categories:

The category index is derived from the Foursquare API's category attribute, from which a list of unique venue categories is generated. From that list, a weighted list is manually assembled based on the types of venues that would be indicative of a good area to open shop. Iterating through the venues, an index is created based on the number of most-relevant venues within a given neighborhood.

3. Methodology and Exploratory Process:

Neighborhoods and Median Home Value:

For this data, I used the Zillow/OpenDataSoft resources. Since there was no other readily available neighborhood data, I had to parse out neighborhood names along with their geo coordinates (both 2D center points and geometric shape boundaries). The geo data from the neighborhood dataset had to be formatted into a consumable GeoJSON structure that Folium could digest in order to properly generate the neighborhood boundaries; this step was completed immediately after merging the ZHVI data into the initial data frame.

Together, this data was enough to generate a choropleth map of median home values.
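
A minimal sketch of how such a map can be produced with Folium, assuming fargo_geo is the GeoJSON FeatureCollection assembled above (with the neighborhood name stored under feature.properties.name) and df is the merged data frame; names here are illustrative, not the notebook's:

import folium

m = folium.Map(location=[46.8772, -96.7898], zoom_start=12)  # central Fargo
folium.Choropleth(
    geo_data=fargo_geo,                        # GeoJSON neighborhood boundaries
    data=df,                                   # merged neighborhood/ZHVI frame
    columns=["neighborhood", "median_value"],  # key column and value column
    key_on="feature.properties.name",
    fill_color="YlGnBu",
    legend_name="Median home value (ZHVI)",
).add_to(m)
m.save("fargo_median_value.html")

The crime map below reuses the same call with "crime" substituted for "median_value".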

Crime Reports:

As mentioned, I thought it might be interesting to derive a basic "violent crime index" by running a hackneyed crawler/scraper, relying heavily on Google's search API and the Beautiful Soup module. I set a timer as a constraint to keep the program from lagging out due to slow servers and other issues that might arise. (Disclaimer: I only performed a limited set of calls for this, within the bounds of the free number of Google API calls permitted, which is fairly limited, since I did not wish to adversely affect anyone's site.) I then parsed the page content for a small list of keywords directly associated with violent crime to create an index.
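
A stripped-down sketch of the counting step, assuming the search API has already returned a list of candidate URLs for a neighborhood; the keyword list and function name are illustrative stand-ins, not the original notebook's:

import requests
from bs4 import BeautifulSoup

CRIME_WORDS = ["assault", "robbery", "shooting", "homicide"]  # illustrative subset

def crime_hits(urls, neighborhood, timeout=5):
    # Count pages mentioning the neighborhood alongside a violent-crime keyword.
    hits = 0
    for url in urls:
        try:
            resp = requests.get(url, timeout=timeout)  # timeout guards against slow servers
        except requests.RequestException:
            continue
        text = BeautifulSoup(resp.text, "html.parser").get_text().lower()
        if neighborhood.lower() in text and any(w in text for w in CRIME_WORDS):
            hits += 1
    return hits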

This approach, while not very scientific or practical, was nonetheless an interesting experiment. I could use this data to create another choropleth map, this time keyed on "crime" instead of "median_value".

Venue Category Listings:

Without knowing anything about Fargo, ND, I had to rely entirely on a cursory Google search and the data/APIs listed above to work out my approach. The Foursquare API was invaluable for learning what unique categories of venues exist in Fargo. After exploring crime-related news and median house values, it was necessary to explore the neighborhoods themselves in order to derive a list of unique venue categories.

From this list, I was able to select a sublist by hand containing the categories most relevant to the type of venue I'd like to open. I then assigned a weight to each value and could perform a weighted assessment of the relevance of venues within a given neighborhood, as sketched below.
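
A sketch of that weighting scheme; the categories and weights here are hypothetical stand-ins for the hand-picked list:

# Hypothetical hand-assigned weights for venue categories congruous with the plan.
CATEGORY_WEIGHTS = {
    "Music Venue": 3, "Brewery": 2, "Skate Park": 2,
    "Coffee Shop": 1, "Hotel": 1,
}

def category_index(categories):
    # Sum the weights of a neighborhood's venue categories; unlisted ones score 0.
    return sum(CATEGORY_WEIGHTS.get(c, 0) for c in categories)

def relevant_categories(categories):
    # Pare a long category list down by intersecting it with the desirable set.
    return sorted(set(categories) & set(CATEGORY_WEIGHTS))

The set intersection in relevant_categories is the same trick used below to pare the displayed category lists down.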

This allowed me to generate another choropleth map, to which I was also able to append each neighborhood's list of categories.

The category list was far too long and full of unhelpful categories, so I pared it down by intersecting each neighborhood's category list with the list of desirable categories.

K-Means Clustering:

For statistical analysis, a clustering technique appeared most appropriate, providing some level of neighborhood segmentation based upon the available data. Given the wildly different magnitudes of the data domains (the indexes being low while the median values were relatively massive by comparison), some preprocessing and standardization of the data was necessary to prevent one variable from completely dominating the others.

I then ran a distortion (elbow) test to determine the ideal number of clusters for this model. The result seemed to indicate that between 4 and 6 clusters would be ideal, so I chose conservatively and went with 4.
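
A minimal sketch of the standardization, distortion test, and final fit, assuming df carries the three variables under the column names used above:

from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler
import matplotlib.pyplot as plt

# Standardize so median_value's magnitude doesn't dominate the two indexes.
X = StandardScaler().fit_transform(df[["median_value", "crime", "category_index"]])

# Distortion (elbow) test: plot inertia for k = 1..9 and look for the bend.
inertias = [KMeans(n_clusters=k, n_init=10, random_state=0).fit(X).inertia_
            for k in range(1, 10)]
plt.plot(range(1, 10), inertias, marker="o")
plt.xlabel("number of clusters k")
plt.ylabel("distortion (inertia)")
plt.show()

df["cluster"] = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)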

4. Results

The clusters derived from K-Means seemed to segment such that three of the neighborhood clusters were each distinguished by one of the independent variables, while one cluster was a sort of mediocre hodgepodge of neighborhoods. Only one neighborhood fell in the cluster centered on the Category Index variable, and that was the Downtown neighborhood.

Distribution of the aggregated values vs. the category index (note: the Downtown point is obscured by its cluster's corresponding centroid).

5. Discussion

Overview:

Based upon the data, criteria, and analytic results discussed above, the Downtown neighborhood is likely the best neighborhood in which to open an alternative music venue. While the area does have some crime, the rent is likely cheap, and it is proximal to a wide variety of interesting venues, allowing some level of demographic cross-pollination.

Runners-up would be anything from cluster 1 (the Hodgepodge cluster) and cluster 2 (High Crime Index). Cluster 0 (High Property Value) would likely feature very high rent and little access to other venues of interest.

Personal Observations:

The crime metric was probably the least dependable data available, mostly due to the collection process. I think a news/keyword analysis technique might have some application, though, perhaps as a low-weight augmentation of a more dependable metric.

The category index approach could use refinement, I think, but it might be useful overall as a way to quantify subjective values in decision-making. Building a list of similar venues, or ones congruous or complementary to the venue being proposed, and then weighting those categories provides a useful way to ensure the neighborhood you are choosing is likely a good fit. While having a hotel or coffee shop nearby is a positive, it's not always indicative of a good location for the demographic you may be catering to: a skate park, gaming cafe, and brewery might be better indicators.

For the clustering results, I likely should have chosen 5 clusters, since that might have split cluster 1 more effectively, making it less of a hodgepodge while distinguishing a definitive runner-up cluster for a good location. From personal exploration of the data before clustering, I would have given Roosevelt/NDSU and West Acres votes for second and third choice respectively, since both feature good category values and likely decent rent (Roosevelt/NDSU takes second because it is directly adjacent to Downtown, not because of rent).

The relevant notebook can be viewed here: Visualize_FARGO_ND

Filed Under: Machine Learning, Software Development, Technology, Writing, Writings

Elements of Color

June 25, 2019 by Riston

Selecting Graceful Website Color Schemes

Image: Aida KHubaeva

If you already have an idea of what colors you’d like to use, or have already set color associations for your brand, then you are already a bit ahead of this section. If you have not quite determined your colors, then here is a nice infographic to help you get started:

Image by Author

Color is a principle that can be easily overdone, so it is best to simplify the color scheme of an application to one or two colors, not including varying shades of the selected colors or neutral colors such as white, black, and grey. It is very easy to cheapen the look and feel of a website by adding too many colors, with few exceptions. It is best to keep color themes simple and neutral, remembering that usually the goal is to provide a relatively transparent user interface with only minimal distractions.

Filed Under: Guides, UI, Web Development

Introduction

June 21, 2019 by Riston

An Introduction to User Experience

Image: O12

I imagine the scenario leading to your visit is that you have an interest in developing a website, but have little experience doing so and are looking for a good place to start. If you have not thought seriously about UX before and just need a concise introduction to the subject, enough to communicate effectively with your designer, then this brief series is for you. If you already have a well-developed idea of what you want, feel free to skip ahead; although, since the following posts are designed to be short and to the point, reading through them should not be a waste of your time, at least as a check that you have given some consideration to all the basic aspects of design.

We will be covering some of the most important topics for communicating your brand, such as user research, color and typography, and generally making your app easy to use. This will help get you thinking in the right direction, so that if you contract a professional developer/designer you will already be a little ahead of the game and able to communicate your ideas more effectively. If you are just starting a venture where you are creating a brand, many of the elements discussed will be important first steps toward making sure your application's design conveys what you are all about.

Filed Under: Guides, Software Development, Technology, UI, Web Development

Know Your Audience

June 20, 2019 by Riston

Planning the Foundations

Image: Pete Linforth

Understanding your target audience, and why they should want to visit your site or use your app, is essentially the most important element in considering an appropriate design. Understanding the basic psychology of your intended audience (why they are visiting your page, what problem they intend to solve by using your app, and how they will integrate your services into their lifestyle and workflow) is key to the success of your project. In a nutshell, your solution should be usable.

The goal of usability is to have an application that is effectively "transparent", allowing the user to accomplish the intended task without really noticing the interface in between. The body of knowledge built on this area of study encompasses an array of disciplines, including art and design, basic engineering principles, and behavioral and cognitive psychology. Fortunately, you do not need an extensive background in any of these disciplines to make useful decisions.

One important resource for thinking about usability design is user research. While not everyone looking to build a basic web site has the funding to contract formal user research, there are a variety of sources of established insights compiled on the subject; Coglode provides one such resource of easily digested, pre-compiled insights. These insights are as effective for developing a content strategy as they are for executing good design principles. The primary focus here is finding ways to provide a productive user experience, guiding the user to favorable action without frustrating, belittling, or otherwise provoking a negative response.

Nearly everyone wants a slick design, an easy-to-use interface, and a modern layout (the exception sometimes being boutique or niche sites that avoid these traits for overriding reasons). The question is: how do you want to facilitate the user's journey?

This is largely the shared job of your designer and content strategist; however, it will greatly expedite the development process if you go in with a few ideas of your own on the subject.

Filed Under: Guides, Software Development, Technology, UI, Uncategorized, Web Development

ChucK: Scripting For Music Composition and Sound Synthesis.

January 17, 2018 by Riston

The ChucK programming language was developed by Stanford University's Dr. Ge Wang, under the supervision of Dr. Perry Cook, for the purpose of music composition and digital signal processing. ChucK distinguishes itself from similar languages by providing a simple yet elegant syntax that is easy for artists who are new to programming, yet versatile enough to let experienced programmers design complex digital signal processing "applications". Another of its key features is that it allows users to change code to alter a performance in real time, and it provides a built-in set of interfaces for accepting live input from analog, MIDI, and other digital sources. The MOOC "Programming for Digital Artists and Musicians", offered through the Kadenze learning platform, is led by an active contributor to the development of the ChucK language, Dr. Ajay Kapur, Director of the Music Technology program at the California Institute of the Arts.

Overview of the Language and its IDE, MiniAudicle

ChucK is fully functional as an object-oriented language, and is syntactically similar to other compiled languages such as Java and C++ in that it requires the strict declaration of data types for both variables and function parameter signatures. Fortunately, this is somewhat simplified by making "string" a native data type, instead of requiring the import of a string library to use in place of char arrays as in C++. The language also allows easy use of dynamic, multi-dimensional arrays without outside container classes, unlike both Java and C++. It also contains the usual slew of default operators for mathematical operations and concatenation, plus one unique operator: "=>", the ChucK operator, which covers both variable assignment and the execution of a particular process.

The language comes with two substantial libraries: the Standard library for working with data in the program, and the Math library for essential computations such as exponents and trigonometric functions. The Synthesis ToolKit library, written in C++, is also integrated into the language through its built-in unit generator (UGen) objects. UGens span a wide range of objects, most notably various types of oscillators and effects such as delays and other filters. Many of these built-in UGen classes feature a large number of functions for manipulating basic sound properties such as frequency and amplitude, and in the case of Synthesis ToolKit objects, more elaborate functions allowing for physical modeling such as pluck position, phonemes, and string tension.

The standard library also provides a host of interfaces for live performance, such as MIDI, and built-in features for working directly with both analog-to-digital (the "adc" object) and digital-to-analog (the "dac" object) conversion. Another key element of ChucK is the necessity of duration: time is integral to running any program in the language, and a duration must be "ChucKed" ("=>") to "now" for the program to do anything at all. The language also allows the development of highly customized classes composed of various unit generators, audio samples, and effects that can be chained together in complex arrangements modeling signal flow. These classes can contain accessors and mutators, and the standard library includes functions for converting frequency to MIDI and MIDI to frequency, allowing simplified scoring and manipulation of object instances.

The miniAudicle is the primary integrated development environment for working with ChucK, and it features three essential windows: the text editor, the console monitor, and the virtual machine. The text editor includes highlighting for ChucK-specific keywords, and the header bar contains the "Start", "Stop", and "Add Shred" buttons, which control the virtual machine. Since ChucK provides for multiple threads ("shreds") and concurrency (adding shreds to run concurrently is called "sporking"), a window showing the active processes is vital, and this is the job of the virtual machine window, which also shows how long each shred has been executing since it was initialized. The console monitor fulfills the basic functionality of any other IDE console.

Composition Methodology

ChucK's standard library does not specify divisions of common musical notation, such as notes defining pitch or rhythmic duration; however, through classes the composer can define musical components according to personal preference. By setting a basic tempo using a duration, it is a simple process to derive and assign duration variables such as whole, quarter, and sixteenth notes, and to invoke them throughout the execution of the program. ChucK's duration type allows for a time resolution as fine as a single sample (e.g., 1/44,100th of a second at a 44.1 kHz sample rate) and as coarse as a week (I would posit that such a lengthy duration's usefulness is limited to soundscape installations).

As with rhythmic duration, the lack of notation indicating pitch can easily be worked around by the designer/composer. Since the standard library allows two-way conversion between MIDI notes and frequency values, it is easy to define a scale as an array of MIDI pitch values. The language's versatility also allows the designer to define completely customized intervals using any tuning specification desired, such as an array of intervallic ratios applied to a base frequency.

Scoring is also highly customizable, and classic programming loops and other control structures provide a versatile medium for writing and composing music. For loops and while loops can have their iterations regulated by duration values such as "beats", and conditional statements can gate the execution of specific code blocks on any relevant boolean expression. All of these control structures can in turn be wrapped into functions and classes in order to divide a score into easily read sections.

One of the most useful applications of ChucK, however, is its versatility as a sound synthesis engine. The basic oscillators and the more complex STK instrument unit generators provide comprehensive building blocks for additive and subtractive synthesis. Arrays of oscillators can be created, with each element's fields accessed and mutated by index. Synth pads can be created by chaining a variety of oscillator and STK objects through effects and digital filters. It is even easy to design granular synths that take WAV samples and partition them by divisions of ChucK's sample duration.

Final Project/Experimentation

For the final project of this MOOC I was bound by substantive limitations; the result, while certainly not the best piece of music I have written, was nonetheless interesting. I defined the scale as:

[50, 52, 53, 55, 57, 58, 60, 49] @=> int dMin[]; // D minor scale as MIDI note numbers (49 = C#, the leading tone)

which contains the essential MIDI notes for D minor, as the name implies. I generally found it useful to define basic patterns as 2D arrays, making it easier to distinguish between pitch and duration values:

[[2, 1, 4, 0],[0, 6, 8, 14]] @=> int bowPat1[][]; // row 0: pitch values, row 1: duration/timing values

Control structures could also easily guide the execution of the score, as in this segment:

Machine.add(me.dir() + "grain.ck") => int grainId;

while (measure < 60) {
    if (measure < 2) { }
    else if (measure >= 2 && measure < 4) { voxPlayer(beat, voxPat1, dMin[0]); }
    else if (measure >= 4 && measure < 6) { drums(beat, drumA, 2); voxPlayer(beat, voxPat1, dMin[0]); }
    else if (measure >= 6 && measure < 10) {
        // etc.

The full code for this assignment can be viewed on GitHub. The initialize.ck file serves a function very similar to a traditional makefile, and the Machine object in the ChucK language provides an intuitive means of issuing compiler instructions.

Conclusion

The ChucK programming language is extremely promising as a new tool for musicians, letting them not only add to their creative palette but work directly with sound itself. One benefit of using an environment like ChucK for music creation and sound manipulation is that the artist is no longer confined to the limitations of their chosen DAW, and in many cases ChucK can be interfaced with other DAWs. One challenge for an artist using ChucK, as with most other forms of computer music, is that it is difficult to incorporate a truly human feel. It is possible to a large degree, however, and in my personal experience ChucK works well for designing elements to integrate into more traditional methods of music-making.

Here is the final project I did for this course, a little cold and computerized, but not terrible:

Final Project

Also, here is a track on Bandcamp where I programmed a basic granular synth to highlight the bridge toward the end of the piece:

Reference:

Kapur, Ajay. Programming for Musicians and Digital Artists. Manning Publications, Shelter Island, NY, 2015. *This book is supplementary to the Kadenze course by the same name: Introduction to Programming for Musicians and Digital Artists

Filed Under: Music, Music Technology, Software Development, Writings

Personality-Informed Neural Training for Cyber-Security Solutions

December 16, 2017 by Riston

Image courtesy of Geralt.


“If you know your enemies and yourself, you will not be imperiled in
a hundred battles… if you do not know your enemies nor yourself,
you will be imperiled in every single battle.”

-Sun Tzu

Introduction

Information security is presently one of the most rapidly expanding fields in information technology, due largely to the complexity of emerging interoperable networks. Contemporary networks contain more than just laptop and workstation computers: mobile devices such as smartphones and tablets now consume a greater percentage of network resources than traditional machines, and the variety of interoperating devices is increasing further with developments in more pervasive technologies such as "smart buildings", the Internet of Things, embedded software as found in self-driving vehicles, and medical devices capable of wirelessly transmitting information. The phrase "complexity is the enemy of security" has become axiomatic in the cyber-security industry, and the increasing complexity of network systems has provided entirely new planes of attack vectors that have rendered many traditional strategies effectively useless.

Techniques and algorithms involving machine learning and adaptive artificial intelligence are also growing, and many firms are working to integrate machine learning techniques into security protocols. Attacks on a networked system can manifest in a multitude of ways, ranging from basic web-based attacks involving cross-site forgery and SQL injection to more sophisticated orchestrations such as Distributed Denial of Service or Advanced Persistent Threat attacks. "Cognitive computing scans files and data using techniques such as natural language processing (NLP) to analyze code and data on a continuous basis. As a result, it is better able to build, maintain, and update algorithms that better detect cyberattacks, including Advanced Persistent Threats (APTs) that rely on long, slow, continuous probing at an almost-imperceptible level in order to carry out a cyberattack."[1]

Within the last few years, analytics has also provided insight into the psychological characteristics of computer users based on social network behavior patterns, opening the door to using analytic techniques to discern personal traits of potential threat agents. Insight into the personality of attackers themselves may yield useful information that gives an adaptive system leverage to not only detect but effectively defend against an attack. Integrating adaptive AI techniques such as deep neural networks with cybersecurity objectives may be the most effective approach to the increasing surface area of attack vectors in modern and emerging networks, and the efficacy of this approach could be greatly enhanced by psychological determinants that enable the construction of strategically useful threat models in real time.

Assets, Threats, and Current Practices

One of the first steps required of any organization when developing a security policy is to accurately assess the organization's assets, relative both to their intrinsic value and to the collateral damage that could be caused by those assets being rendered unavailable or exploited by malevolent actors. While understanding the value of an organization's assets is generally useful for determining the appropriate measures for securing a given network [2], understanding the nature and value of assets can also provide insight into building effective threat models. Understanding common characteristics of threat agents, such as intention, motivation, and source, can give the organization a useful basis for building a taxonomical hierarchy of potential threats [3]. Both the classification and prioritization of these various threat agents can be used to provide features and rules for informing the training procedure of an AI's neural network.

Data mining of social media networks has provided useful resources for researching predictive personality modeling. One such study, reported in 2013, used a variety of features, including linguistic and other social network patterns, to determine personality characteristics, and the results were effective enough to encourage future research in this field [6]. The measures of personality used for the study were the "Big 5" determinants: Extroversion, Neuroticism, Agreeableness, Conscientiousness, and Openness. Further research in this domain could likely render insights into common threat agent attributes such as skill and motivation, for instance indicating whether an attacker is motivated purely by personal gain or by anger. This may, in turn, help an AI effectively exploit the attacker's personality weaknesses in order to inform an appropriate strategy.

The most common implementations of network security involve both Network Intrusion Detection Systems (NIDS) and Network Intrusion Prevention Systems (NIPS), and most deployments are a composite of both approaches. Signature-based detection models have traditionally been the most common approach to detecting attacks; however, with the increasing sophistication and variety of attack methodologies, this approach is proving ineffective as a stand-alone solution. Researchers have turned to refining anomaly-based detection methods, but in its current state of development, this approach is still challenged by frequent false positives for otherwise normal network behavior. [4] These shortcomings of anomaly-detection NIDS have been successfully mitigated by the adoption of deep learning techniques for accurately classifying network anomalies. [5]

Basic Neural Networks and Current Strategies

The concept of neural networks as a paradigm for designing adaptive artificial intelligence has existed for decades, and the original construct of an artificial neuron was the perceptron. The perceptron, developed by Frank Rosenblatt, is essentially a function that accepts a combination of binary inputs in order to produce a single binary output. The most common adaptation of the perceptron in contemporary models is the sigmoid neuron, which allows for both weighted inputs and a bias factor for the neuron itself. The weighted-input and bias attributes of the sigmoid neuron facilitate more effective decision making for the algorithm as a whole, and training these neurons involves adapting their specific weights and biases according to the information provided. [7]

The architecture of a deep neural network comprises essentially three classifications of neurons: an input layer, an output layer, and a series of "hidden" layers in between. The number of hidden layers varies by implementation, and a greater number of intermediary layers allows for more specialized training of the network. [7] Approaches to training neural networks include supervised, unsupervised, and semi-supervised training, with self-taught methods considered the most valuable avenue of research for future implementation. The efficacy of a given neural network implementation is generally judged by its accuracy, with the metrics defined as precision, recall, and F-measure, the last being the harmonic mean of precision and recall. [5]
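
To make these pieces concrete, here is a minimal NumPy sketch (mine, not from the cited sources) of sigmoid neurons arranged into an input, one hidden, and an output layer; training of the weights and biases is omitted:

import numpy as np

def sigmoid(z):
    # Smooth 0..1 activation; the sigmoid neuron's output.
    return 1.0 / (1.0 + np.exp(-z))

def feedforward(x, layers):
    # layers: list of (weights, bias) pairs, one per non-input layer.
    a = x
    for W, b in layers:
        a = sigmoid(W @ a + b)  # each neuron: weighted inputs plus its bias
    return a

rng = np.random.default_rng(0)
layers = [
    (rng.normal(size=(4, 3)), rng.normal(size=4)),  # hidden layer: 4 neurons, 3 inputs
    (rng.normal(size=(1, 4)), rng.normal(size=1)),  # output layer: 1 neuron
]
print(feedforward(np.array([0.2, 0.7, 0.1]), layers))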

In research, most deep neural network implementations for intrusion detection are trained on the NSL-KDD dataset or its pervasive predecessor, the KDD Cup 99 dataset. These implementations are generally used to parse network logs in order to detect anomalies in network activity, such as unusual packet volume or other user activity. When discussing the viability of deep learning strategies, an unsupervised approach is considered the most useful, and one methodology is rule-based clustering, which allows the programmer to establish specific rules and objectives for the algorithm while allowing the network to determine its own categorizations.
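
One way to picture rule-based clustering over network logs is the hedged sketch below: an unsupervised clustering pass groups connection records, and analyst-defined rules then decide which clusters merit attention. The feature layout (a "connections.csv" file with packet count in column 1) is hypothetical:

import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical per-connection features parsed from network logs,
# e.g. bytes sent, packet count, failed logins.
log_features = np.loadtxt("connections.csv", delimiter=",")
X = StandardScaler().fit_transform(log_features)
labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(X)

# Rule layer: flag clusters whose mean packet count sits well above baseline.
for k in range(8):
    members = X[labels == k]
    if members[:, 1].mean() > 2.0:  # > 2 standard deviations after scaling
        print(f"cluster {k}: unusual packet volume ({len(members)} connections)")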

Dynamically Incorporating Personality Into Threat Models

Persona non Grata is a threat modeling approach that specifically tasks users with modeling threats according to an attacker's potential motivations and abuses; however, much like signature-based NIDS, it can be limited to a predefined subset of threat agents. [8] Threat agent personality characteristics, at least of the intentional variety, can probably be reduced to a specific subset that can serve as rules defining the features of a neural network. The primary goal of the neural network should be both to define anomalous network activity and to respond appropriately to a given threat based upon its distinguishing characteristics.

In order to generate a normalized baseline of network activity, the implementation must be able to construct accurate user models to determine that a user is an authorized operator of the network. One possible strategy for implementing user profiles is a silent application of cognitive and behavioral biometrics, such as keystroke dynamics, developed dynamically over time. [9] Such a practice could help determine whether an attack is being orchestrated through compromised access controls, such as a password that has been hacked. This level of detailed user profiling could help establish and maintain a more accurate baseline of network activity while also detecting compromised accounts.
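
As a toy illustration of the keystroke-dynamics idea (my sketch, not a cited implementation), a silently accumulated baseline of inter-key intervals can be compared against the current session:

import numpy as np

def intervals(timestamps):
    # Inter-key intervals (seconds) from a sequence of key-down timestamps.
    return np.diff(np.asarray(timestamps))

def matches_profile(session, baseline, tolerance=3.0):
    # Flag the session if its mean interval drifts too far from the baseline.
    z = abs(session.mean() - baseline.mean()) / (baseline.std() + 1e-9)
    return z < tolerance

baseline = intervals([0.00, 0.14, 0.31, 0.44, 0.62])  # built up silently over time
session = intervals([0.00, 0.45, 0.91, 1.30, 1.82])   # a markedly slower typist
print(matches_profile(session, baseline))  # False: escalate authentication checks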

Defining attacker characteristics and normal network activity would provide a very useful and dynamically configured subset of rules whereby a neural network could train itself and adapt in perpetuity. Since these algorithms operate by continuously scanning a stream of network logs and other network data, it is important to implement an algorithm that can initiate a dynamically expanding context of analysis while making certain that unimportant anomalies are properly discarded, in order to avoid unnecessarily invoking defensive and emergency procedures. This could manifest through a series of virtualized scenarios, much as when designing a predictive algorithm for a chess game, with a pre-defined hierarchy of procedures initialized based on stochastic evaluation of those scenarios.

Ethical Considerations and Conclusion

As with any case of invoking artificial intelligence to predict and monitor personality attributes, there are ethical considerations that must be integrated into the development process. In profiling user activity, it is important not to allow the algorithm to reveal potentially embarrassing or exploitable information about the user, especially if the user's activities are in compliance with the organization's use agreement. There is also the likelihood that data from predictive algorithms could be used to execute discriminatory bias against minorities or persons with underlying mental conditions, as in the case of criminal risk scores [10]. For these reasons it is important that ethical considerations be incorporated into the design process, that there be limitations on the application's offensive capabilities, and that sufficient administrative override be included.

Beyond the mentioned ethical concerns, incorporating personality traits common to threat agents into rule-based neural network training promises an invaluable toolset for future models of integrated security systems, allowing the AI to essentially "get into the head" of a malicious attacker and exploit their natural inclinations to their disadvantage. An attacker predisposed to irritability and neuroticism could be goaded into making a mistake out of frustration; or, if the AI determines that the attacker is financially motivated and not technically proficient, the attacker could be tricked into providing personally identifying information by exploiting their desire for money. This approach could also save an organization resources wasted on unnecessary downtime, by properly defining normalized user activity through personalized biometrics against which anomalous network activity can be accurately detected.


References

1. Greengard, Samuel. "Cybersecurity Gets Smart." Communications of the ACM, vol. 59, no. 5, pp. 29-31.
2. Merkow, Mark S. & Breithaupt, Jim. Information Security: Principles and Practices. 2nd ed., Pearson Education Inc., Indianapolis, IN.
3. Jouini, Mouna; Rabai, Latifa Ben Arfa; Aissa, Anis Ben. "Classification of Security Threats in Information Systems." Procedia Computer Science, vol. 32, 2014, pp. 489-496.
4. Lambert, Glenn Monroe. Security Analytics: Using Deep Learning to Detect Cyber Attacks. University of North Florida School of Computing, 2017.
5. Niyaz, Quamar; Sun, Weiqing; Javaid, Ahmad Y; Alam, Mansoor. A Deep Learning Approach for Network Intrusion Detection System. College of Engineering, The University of Toledo.
6. Markovikj, Dejan; Gievska, Sonja; Kosinski, Michal; Stillwell, David. Mining Facebook Data for Predictive Personality Modeling. AAAI Technical Report WS-13-01, 2013.
7. Nielsen, Michael A. Neural Networks and Deep Learning. Determination Press, 2015.
8. Shull, Forrest. "Cyber Threat Modeling: An Evaluation of Three Methods." SEI Blog, Nov. 11, 2016.
9. Ciampa, Mark. Security+ Guide to Network Security Fundamentals. 5th ed., Cengage Learning, Boston, MA, 2015.
10. Angwin, Julia; Larson, Jeff; Mattu, Surya; Kirchner, Lauren. "Machine Bias." ProPublica, May 23, 2016.

Filed Under: Machine Learning, Security, Software Development, Technology, Uncategorized, Writings
