Source: Can We Trust Autonomous Weapons?
Why listed? This article is from an older edition of ACM, from December 2016; I am catching up on my subscriptions. It covers a subject that has worried me since I saw it fictionalised in the first Terminator movie, in the scenes from the future with armed robots and aerial attack vehicles taking on the human resistance. This piece discusses the ethical issues in autonomous weapon systems today, such as maintaining human control regardless of the algorithms driving the decision-making processes within such systems, and what is being done to put policies in place to protect people. There are some scary systems being developed for security and defence – take a look at the links at the end of the article. Protective policy, however, is not keeping pace with the fast-moving technology, as the article warns. This is going to become a bigger issue in the years to come, given the level of military R&D spend on such systems.
Source: Learning to Learn
Why listed? Another one from the December 2016 ACM issue, which I am reading on the bus as I type. The learning process fascinates me, as it is something I have always observed in myself when picking up a new skill, whether a sport (swimming, pilates, kayaking), all of which I learned, and am still learning, as an adult, a new technical or manual skill, or a general-interest topic like wine production. I had a really good teacher in school who said we must be prepared to keep learning, up-skilling and being adaptive throughout our lives. That was the mid 1990s, so even then this teacher was advising us to be wary of any “job for life” scenario in case we got complacent. I took his advice on board and, regardless of age, have tried to keep following it. As you age it’s easy to think: why bother? I know how to do this task pretty well; I’ll look stupid if I move out of my comfort zone to learn a new area, as you have to become a beginner all over again. What always motivates me in these scenarios is trying to understand the system, learning tips and tricks from colleagues, and the chance to develop new skills and tools to support my workflow better. This article seems to require subscription access, but it is a very good read. I am not sure if you have to be a paid member or can access it just by registering. In any case, I added a related article on the same topic that is freely accessible. The articles don’t just focus on some specific skill; they talk about learning to do the things that ultimately make you more effective at what you do, not just as an individual, but as a team when working as part of a larger unit.
Why listed? My tools and shortcuts are a topic again, so the above resources are lists of go-to quick-reference command shortcuts for the IDEs. I use Eclipse mostly, as I really liked Neon, but I have been experimenting with IntelliJ on and off for the past few months. I had been very focused on my OS X toolset and workflows for the past few years, but I am using a mix of environments again, so I need to revisit my setup and workflows to see if any areas can be improved. Even though I had set up a bunch of assistive tools for search and navigation in my workspace, I was noticing small delays, even if they only seemed like a few seconds, particularly with context switches; even keyboard shortcuts were not doing it for me. Colleagues each had their tips and tricks, and these were invaluable, but I still felt my workflow was not as fast as I would like. You can be efficient within the IDE, the text editor or whatever, but the problem, as I tried to pin down what it was for me, was the integration between them all, as they come from different vendors and open-source projects. Once I switch out of them, that is when small but noticeable delays occur, even if only a fraction of a second or a few seconds. It is still a delay, and it breaks flow. What I really want is a system that keeps context based on my eye movement, in tandem with my keystrokes and ideally the workflow in my mind, so it brings up the UI component I need based on the exact point where I was and jumps forward based on my intent before I issue a command, regardless of the hardware and software input method, similar to the autopilot function in a plane. As I Googled my random queries about whether this was possible, I found a cool area on StackExchange called WorldBuilding. It doesn’t answer my question, but there are interesting comments about GUIs from different camps on that thread.
After skimming this Wikipedia resource on brain–computer interfaces, organised by topic, it is clear there is plenty of active work in this domain across various application areas. This is a list of research groups from Brain / Neural Computer Interaction (BNCI).