My professional interests in CPTED (crime prevention through environmental design – defensible space) linked to early versions of machine learning…began in the late 1980s, while I was teaching a ‘security studies’ course as a new academic at Southern Illinois University.
Colleagues at universities and corporations and I were recognizing…the relevance of how the ‘built environment’ could affect the receptivity, vulnerability, probability, and even criticality of crime occurrence.
The concepts – constructs of CPTED and ‘defensible space’…originated with C. Ray Jeffery and Oscar Newman respectively.
In 1972, Newman published a strong critique…regarding the design of America’s public housing environments, i.e., the crowded, limited-space, high-rise structures which rapidly morphed into havens for bad actors and criminal activity, and into sources of persistent fear among residents.
My view at the time, which remains so today, is that…Drs. Jeffery and Newman and practitioner Tim Crowe, and perhaps myself to a lesser extent, could articulate – demonstrate how the design of a ‘built environment’ could mitigate, if not prevent, crime, and the fear of crime, by creating…
- stronger senses of personal territoriality, and
- positive behaviors associated with that territoriality.
Newman, Crowe, et al aggressively promoted these architectural design perspectives…for built environments that…
- reduced resident anonymity,
- increased the sense of natural surveillance, and
- were less supportive of – receptive to undesired behavior and/or criminal activity.
Soon, the principles of CPTED and ‘defensible space’ became, practically speaking, interchangeable…and, in 1991, criminologist Tim Crowe (University of Louisville), through his CPTED – defensible space writings and guidelines, demonstrated on larger scales, i.e., neighborhoods and communities, how opportunities to commit crimes could be demonstrably reduced – mitigated by incorporating those design principles into built environments.
Too, CPTED’s operational relevance to (a.) rational choice and (b.) routine activity theories has consistently shown ‘connectivity’, i.e., an influence on – relationship to the ‘opportunity’ for criminal activity to occur.
To be sure, some criminologists were not especially enamored with the principal premise of CPTED…particularly the contention that deploying CPTED-influenced design in the ‘built environment’ would merely lead to (crime) displacement.
CPTED delivers intangible assets…through its focus on effective design and (people’s) use of the built environment in a manner that mitigates people’s – users’ fear – expectation of crime within their territory.
CPTED’s primary objective being to…reduce – remove the opportunity for crime to occur in an environment, and to promote positive (people-to-people) interaction within designated (CPTED-influenced) spaces by legitimate users. (Adapted by Michael D. Moberly from Greater Manchester, UK ‘Design for Security’.)
1990s version of machine learning…
In early 1990, I was fortunate to collaborate with Jon Davey…a forward-looking – thinking architecture professor in SIU’s School of Architecture. Davey shared my interest in practical applications of CPTED relative to designing built environments.
Our collaboration focused on programming CPTED principles into CAD (computer-aided design) machines…ala large desktop computers. Yes, while this was the early 1990s, it was, legitimately, an early form of ‘machine learning’, albeit rules-based (computer) programming.
What we sought to collaboratively accomplish…and did so with varying degrees of success, was to ‘program’ architecturally nuanced principles of CPTED into a CAD machine. This machine (computer), in turn, would alert CAD users (designers of built environments) when a particular design feature breached a CPTED principle and/or requisite. A ‘prompt’ would appear on the machine’s screen awaiting (architectural) modification or override as the circumstance and/or (design) warranted. We coined this ‘good – bad CPTED’.
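In spirit, that rules-based ‘good – bad CPTED’ prompting works like the minimal sketch below. The specific rule names and thresholds here are illustrative assumptions of mine, not the actual early-1990s CAD programming:

```python
# Hypothetical sketch of rules-based 'good - bad CPTED' checks: each rule
# inspects a proposed design feature and, when breached, produces a 'prompt'
# for the designer to modify or override. Rules and thresholds are invented
# for illustration only.

def check_design(feature):
    """Return CPTED 'prompts' for a proposed design feature (a dict)."""
    prompts = []
    # Rule: tall perimeter shrubs obstruct natural surveillance.
    if feature.get("shrub_height_ft", 0) > 3:
        prompts.append("bad CPTED: shrubs over 3 ft obstruct natural surveillance")
    # Rule: unlit walkways undermine territorial reinforcement after dark.
    if not feature.get("walkway_lit", True):
        prompts.append("bad CPTED: walkway lacks lighting")
    # Rule: many units sharing one entrance increases resident anonymity.
    if feature.get("units_per_entrance", 1) > 8:
        prompts.append("bad CPTED: too many units share one entrance")
    return prompts or ["good CPTED"]

# A designer would see these prompts on screen and modify or override.
print(check_design({"shrub_height_ft": 5, "walkway_lit": False}))
```

The essential point: the machine does not learn anything here. Every rule is hand-coded in advance, which is exactly what distinguishes this era from the labeled-data approach described next.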
Machines shaping – framing their own rules…one example occurred in 2007 when Fei-Fei Li, who went on to head Stanford’s Artificial Intelligence Lab, stopped trying to program computers to recognize – distinguish specific images. Instead, she began labeling millions of raw images which, it was known, children would likely encounter by age three. These images were, in turn, fed into computers. As volumes of labeled data sets (images) were introduced to a machine, the machine could begin to shape – frame its own rules for deciding whether a particular set of digital pixels accurately reflected the images introduced, i.e., a cat or dog.
Thus, professor Fei-Fei Li began “teaching computers to understand pictures.” Last November, Li’s team unveiled a program that identifies the visual elements of any picture with a high degree of accuracy. IBM’s Watson machine relied on a similar self-generated scoring system among hundreds of potential answers to crush the world’s best Jeopardy! players in 2011.
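A toy sketch can make the shift concrete. Instead of hand-coding rules, the ‘machine’ below derives its own decision rule (a per-label average) from labeled examples. The two-number “images” and the nearest-centroid method are deliberate simplifications of mine; real image classifiers learn from millions of labeled photos:

```python
# Toy illustration of a machine framing its own rule from labeled data.
# Each 'image' is just a pair of brightness values - invented data, purely
# for illustration.

def train(labeled_examples):
    """Compute a per-label average (centroid) from labeled examples."""
    sums, counts = {}, {}
    for pixels, label in labeled_examples:
        s = sums.setdefault(label, [0.0] * len(pixels))
        for i, p in enumerate(pixels):
            s[i] += p
        counts[label] = counts.get(label, 0) + 1
    return {label: [v / counts[label] for v in s] for label, s in sums.items()}

def predict(centroids, pixels):
    """Label a new 'image' by its nearest learned centroid."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(pixels, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Labeled data sets (images) introduced to the machine...
examples = [([0.9, 0.8], "cat"), ([0.8, 0.9], "cat"),
            ([0.1, 0.2], "dog"), ([0.2, 0.1], "dog")]
rules = train(examples)             # the machine frames its own 'rule'
print(predict(rules, [0.85, 0.9]))  # a new, unlabeled image
```

No one told this program what a ‘cat’ looks like; the rule emerged from the labeled examples, which is the essence of the change Li’s work represents.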
Machine learning is not yet quite akin to human learning…however, machine learning does graze quite well through extraordinarily large combinations of data and variables, enough to now be a relatively mainstream management tool. Proponents of machine learning rather convincingly argue that now is the time for management teams to achieve familiarity, because the competitive significance of business models and strategic planning derived from machine learning will advance and surge.
Machine learning today…is no longer the preserve of artificial-intelligence researchers or companies ‘born-digital’ ala Amazon, Google, Netflix, etc.
Machine learning is based on algorithms which ‘learn’…through the acquisition of (big) data without relying on rules-based programming and/or inputs. Machine learning developed as a new – distinct scientific discipline beginning in the late 1990s commensurate with…
- advances in digitization, and
- computing power which was becoming significantly less expensive.
Collectively, these advances in machine learning…along with the increasingly unmanageable volumes and complexities associated with growing demands for – uses of big data, influenced data scientists to devote less time to building finished (computer) models and, instead, to ‘train’ computers to do it themselves, ala machine learning.
There is an interesting claim made recently by Ram Charan…in which he suggests that “any organization that is not a math house now, or is unable to become one soon, is already a legacy company.”
- (Adapted by Michael D. Moberly from Ram Charan, The Attacker’s Advantage: Turning Uncertainty into Breakthrough Opportunities, New York: PublicAffairs, February 2015).
- Other material adapted by Michael D. Moberly from ‘An executive’s guide to machine learning’, McKinsey Quarterly, June 2015, authored by Dorian Pyle and Cristina San Jose, and a TED Talk, March 2015, at ted.com.
…the person who elects not to read has little or no advantage over the person who cannot read! (Variously attributed to Samuel Clemens; adapted by Michael D. Moberly)
Michael D. Moberly September 13, 2017 St. Louis firstname.lastname@example.org, the ‘Business Intangible Asset Blog’ since May 2006, 650+ blog posts published, where one’s attention span, intangible assets, and solutions converge!
Readers are invited to explore other published blog posts, video, and position papers at https://kpstrat.com/blog