I HAVE SUCCESSFULLY extended the upgrade cycle for our mobile phones and now expect the handsets to work at least five years before being refreshed. Today, I've started pushing the functionality of an iPhone we bought in 2018 by using it as a daily transcription device.
I hope to spend the next two weeks working with Dylan (12) as he documents happy moments and as he explores some of my most memorable blog posts and photographs. We're starting with Dylan recalling a story he heard told about a local shop by our friend Simon. Here is how the conversation appears on Otter.ai.
This is Dylan talking to Bernie. Dylan is going to explain something funny that happened in Questum. So did anything funny happen in a story in Questum that you heard from Simon?
What was the funny thing? TK Maxx? Simon works in TK Maxx. Where does he work in TK Maxx?
What else does he have to do? And how does he go upstairs in TK Maxx?
What happened in the lift one day with Simon and TK Maxx?
An old lady was in the lift and didn't press any of the buttons, just sitting there. He asked her, "Oh, how long were you sitting here and did you press any of the buttons?" and then she didn't, and then she clicked the button and then he put the button on. And it went up.
What happened? Did she think the store is really big?
Yeah. So she kept going up. And she didn't realize she had gone up before.
We should make stories about Simon. Let's see if this makes a story that we can see online. Okay, stories from Simon.
I want to refine a process where we create four topics for discussion in a written outline form. Then use the cracked iPhone to talk about the outlined points. And as Otter.ai renders our conversation, we edit the transcript and share the result as a blog post.
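That outline-to-post loop could be sketched in a few lines of code. The helper below is my own illustration, not part of Otter.ai's tooling: it assumes the edited transcript has been exported as plain text with blank lines between turns, and assembles it with the four discussion topics into a Markdown draft.

```python
import re


def transcript_to_post(title, topics, transcript):
    """Assemble a four-topic outline and an edited Otter.ai transcript
    (exported as plain text) into a Markdown blog post draft."""
    lines = [f"# {title}", "", "## Outline"]
    for topic in topics:
        lines.append(f"- {topic}")
    lines += ["", "## Conversation", ""]
    # Collapse stray line breaks and double spaces the export leaves behind.
    for paragraph in transcript.strip().split("\n\n"):
        lines.append(re.sub(r"\s+", " ", paragraph).strip())
        lines.append("")
    return "\n".join(lines).strip() + "\n"
```

A draft produced this way would still need the human editing pass before it is shared as a blog post.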
I think there's value to this workflow and believe I should encapsulate it as part of a Digital Literacy workshop in one of my Notion databases.
If you have ideas that would embellish this speech-to-text-to-post workflow, please let me know.
Bonus Link: Photos snapped with our long-serving iPhone 8.
[Bernie Goldbach teaches digital transformation for the Technological University of the Shannon.]
by Bernie Goldbach in Dungarvan
ONE OF THE BEST discoveries of the summer of 2023 has been the very cheap and cheerful Local Link 356 service connecting Clonmel in County Tipperary to Dungarvan in County Waterford. In my experience, it is a well-used service.
The second time I visited Dungarvan was to sample the West Waterford Festival of Food. It was the first event I attended in Ireland where my handheld maps helped me discover some real gems. I wish all other Irish festivals would follow the simple and effective mapping of the West Waterford event planners.
Before we drove to Dungarvan for the festival, I had already discovered the food festival was on Google Maps. This meant it was also on my mobile handset.
Today, I discovered some of the best tidbits about #wwfof venues now appear on Facebook, not on TwiX. And if I use Google Maps, nearly every one of the festival venues has a pinpoint location and a review. This makes it very easy to identify and share the pinpoints by SMS or as social networking updates.
Back in 2014, Microsoft had Here Maps running on my Nokia Lumia. Today, that service has pivoted to an exceptionally robust spatial computing service. It appears the Bing Search Crawler is seeking out and indexing evidence it finds of Here.com so I'm adding a reference to that URL as a piece of meta data to this blog post.
Ten years ago, many of the first generation Twitter users in Ireland were using Foursquare to talk about the Festival of Food as well as checking into a few of the venues. In 2014, the Foursquare recommendations I saw on my handset pointed me back to handy reminders of cafes and bakeries I had visited in the area before. Every time we take the Local Link 356 service to Dungarvan, my handset will be able to give me personalised recommendations long after the #wwfof stream of Facebook comments fades away.
In 2023, I continue to believe that the best fish and chips in Dungarvan are served by the Anchor Bar.
I wish every festival organiser in Ireland would cross-check Google Maps and Trip Advisor and add agendas with tips for visitors. Doing this would transform the visitor experience. I know we make much better use of our time in Dungarvan by being able to see comments from visitors and proprietors.
[Bernie Goldbach teaches digital transformation on the Clonmel Digital Campus for the Technological University of the Shannon. He snapped Grattan Square while on a local link service to Dungarvan from Clonmel.]
by Bernie Goldbach in Clonmel
After one semester of integrating AI into my daily lab sessions, I can assuredly say AI has made a profound impact on the way our UX designers work. Leveraged purposefully, AI enhances creativity, improves personalisation, and trims workflows. As I review my early 2023 teaching practice, I want to offer a few observations about using AI while teaching creative media students in a university setting.

Aiding Accessibility and Inclusivity
Without even coming close to ChatGPT, people can take some simple steps to ensure their use of AI enhances accessibility. It starts with reviewing the alt-text tags that Microsoft generates inside programs such as Word and PowerPoint. Microsoft's AI technology tries to make digital experiences more inclusive by automatically creating text descriptions of images. Content creators should review that AI-generated content. To be truly accessible, web sites should have alt-text for images that is optimised for screen readers. This is happening behind the scenes already, which means AI contributes to a more inclusive design approach. I'm a big fan of adding meta data to online folders and directories because those ReadMe files help me locate files I've saved since the 90s. I've discovered the Markdown files I'm adding to OneDrive and Google Drive locations help me find information faster when I ask the respective mobile apps questions on my mobile phone.

Creative Possibilities with AI Tools
In the Web Content Management Systems module that I taught, we used ChatGPT+ to quickly produce code snippets. Students learned to create prompts for the AI to generate design elements, suggest layouts, and offer colour palettes based on user specifications. This meant I could reduce the amount of time students were given to produce mock-ups and increase the time for collegial review of the creative process.

Intelligence about Personas
A good UX developer can create engaging content and clever interfaces. We discovered we could ask both Bing and ChatGPT+ to write content for specific audiences. If we didn't like the result, we could revise the prompt and regenerate a response. Our results in labs revealed tailor-made mock-ups of interfaces and content. Doing this meant we had to cross-check the upstream sources used by the AIs to verify the suggestions were valid. By checking these footnotes, students achieved a higher level of digital literacy.

Enhanced User Research and Insights
We provide eight semesters of UX training to our Creative Media and User Experience Design students. They learn to understand user needs and behaviours. These students also learn how to leverage AI-enabled tools to analyse vast amounts of user data. An AI can review hundreds of data points concerning user patterns, pain points, and expectations. We also use a "visitor paths" function of Statcounter to make informed design decisions about text enhancements, video placement, and photostreams. An AI can serve as a secondary evaluator for mock-ups, ensuring user interfaces are not only aesthetically appealing but also highly functional and user-centric.

Intelligent Voice Interfaces
My 12yo son talks to handsets, smart speakers, and laptops. Voice interfaces have become integral parts of Dylan's computer literacy. He often argues with Alexa or a weather app if he doesn't believe its answers. I think his interactions are more human-like and intuitive. Alexa uses Natural Language Processing and Machine Learning to better understand user intent, anticipate what we need, and deliver seamless and contextually relevant responses. If I have a teaching block that extends two or more hours, I set up a calendar reminder to prompt me to show a live session with Otter.ai during which I ask myself what I've done in the preceding time slot. And I occasionally ask overhead questions of students. They can see the transcript generating dynamically on the large digital screen as we produce a five minute recap of our teaching period.

Streamlining Design Workflows
Most of the criticism I've read about AI in academic environments concerns the end-user experience. I've discovered AI can also optimise the design process itself. AI-driven automation tools can handle repetitive tasks, such as image resizing, content tagging, and data analysis. Services like Divi Builder inside WordPress or the AI inside Notion can free designers to focus on more strategic and creative aspects of their work. This increased efficiency means projects can finish sooner, content can publish more frequently, and the creative students who master these workflows will be able to produce greater client satisfaction when on the job.

We need to validate the above findings.
I'm blogging about the impact that AI has in my classrooms because I hope to spark a collegial discussion about where we're headed. AI continues to evolve. We need to understand its influence on the UI/UX design field and we need to equip our graduates with tools they can use to get the best results. I believe we should be at the forefront of this transformative journey by harnessing the power of AI to craft captivating, personalized, and user-centric experiences. I've seen how it's possible to unleash student creativity to revolutionise user research and to optimise design workflows. I believe we need to embrace AI-driven toolsets, starting with students using these tools to push the boundaries of what’s possible. By doing that, our graduates will be able to thrive in an AI-driven era of design.
by Bernie Goldbach in Clonmel
I SPENT 55 MINUTES on a catch-up Zoom Call with journalist and thinker Karlin Lillington today and want to mark my timeline with the event. Although the purpose of the call was to share what we think about the Fediverse, some of the preliminary commentary helped me resurface some important references to shared experiences. I need to sharpen the screenshot of Karlin before Google Images adds it to its collection.

Shared Appreciation of Rehomed Animals
I brought Kerry, our rehomed Bedlington Terrier, to the Zoom call. Kerry has irritable skin under her belly fur so I need to take her for monthly injections. We also have a spray that soothes the itchiness. But 24-26 days after each injection, Kerry gets irritable again. So I use Apoquel tablets to control the irritation. Kerry also has ear problems so there's a lotion for that condition.
The most interesting part of her monthly treatment is Kerry's favourite vet, Niamh Buck. I think Niamh has a Bedlington Terrier as well because she knows all the anomalies of the breed.
Karlin knows all about these issues because she has Westies and Cavaliers, affectionate and playful pets.

Media Literacy Skillset
For more than 20 years, I've read Karlin Lillington's columns in The Irish Times and in The Guardian. Karlin has a sophisticated grasp of media literacy. That's part of my teaching practice. Media literacy for young teens is a special interest of mine. I can see a time when our shared concerns coalesce as a partnership in a Community of Practice in collaboration with Media Literacy Ireland or Erasmus.

The forest out back
Even though the Surface Book I used had a rear camera that could have easily shown the rural view we have in our home, I didn't show the view. So I'll offer part of my Flickr photostream to illustrate one of our crown jewels: the Ice House.
We discovered the structure abutting our property line when I excavated 10 tonnes of dirt during COVID. I did it one wheelbarrow at a time. That seems so far away now because I'm currently hobbling around with a weak right knee.

Youth Media and Short Form Reporting
I explained how Dylan (12) conceptualises and creates his video clips for his Little Bit of Tipp series on YouTube. He gets an idea that he can write down. Sometimes he cheats and asks Google to write the sentences for him. I make him write out the letters on paper, revise the questions, and then send the letters to a prospective interviewee. In early 2023, Dylan wanted to interview Rachael Blackmore, the prize-winning Irish jockey.
Thanks to Father Jimmy, the pastor of Killenaule, Rachael got Dylan's letter and she agreed to an interview. I reckon Dylan has replayed that YouTube interview 100 times. He critiques himself for the way he asked the questions and he suggests ways to enhance the production values.

Working with Curmudgeons
Back when I had an NUJ card with the Irish Examiner, I had to interview A-listers about trends in technology.
Both Karlin and I had memorable interactions with the same curmudgeons. I'll let sleeping dogs lie and just say that I appreciate the emotions some people may exude when they think they've been door-stepped.

Tech Headaches
I have an aversion to always updating templates and then paying more every year for the same small patch of online real estate. This aversion is the biggest reason I still write on a blogging platform that is Perl-based. I pay less than €250 annually and have no restriction on bandwidth, storage, or domain names. Karlin shared her current project with the relaunch of her Cavalier Forum.

To be continued
The purpose of our Zoom call was to get my reading on what's happening to many of the original gang of Irish people who set up Twitter accounts between 2006 and 2008. I'll offer a sophisticated answer by changing the question in a follow-up blog post. In the meantime, you can scroll the photos I've shared on my MicroBlog.
[Bernie Goldbach teaches digital transformation on the Clonmel campus of the Technological University of the Shannon. Karlin Lillington's weekly technology column normally appears in the business section of the Irish Times.]
I'VE RESTARTED my daily flow with Notion as part of my Personal Knowledge Management, spurred on by Larry G. Maguire and helped massively by Sarah Brennan's excellent YouTube tutorials. I'm writing this blog post to remind myself I owe RedGregory at least five coffees because of her free templates and excellent online teaching style.
I'm taking time to note what's pulling me back into Notion before rewatching the six videos listed below.
- How to build a second brain in Notion
- Notion for productivity
- Notion Buttons
- Weekly Calendar for Task Planning
- Notion Planner
- Notion Shortcuts
The major reason I'm back into Notion is because several top students have used it to achieve laudable results. I want to boost their upward trajectory by porting most of my academic modules into Notion so they can show me how my Notion content looks inside their Notion workspaces. This will be excellent 360 degree feedback for me as a university lecturer. I've mentioned before that I need to cull my haphazard content from Notion, streamlining the process from capturing content to reusing materials.
Besides feedback from my top students, I've been nudged by Larry G. Maguire to mock up a digital transformation teaching programme we can use with students. I've borrowed a template from Monica Rysavy and that has accelerated my work.
I'm still a daily user of Obsidian, mainly because I like its speed on my handset and its failsafe sync with my desktop. It's relatively straightforward to share content between Obsidian and Notion but I will need to tweak the workflow. I like having back-ups to personal knowledge management but I also need to ensure I can quickly use material I discover from subscriber-only content that drops onto my mobile screens.
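One concrete example of the tweaking that sharing between the two tools requires: Obsidian's [[wiki-links]] don't survive a paste into Notion. The helper below is my own sketch, not an official Obsidian or Notion utility, that rewrites [[target]] and [[target|alias]] links as plain text before a note is moved across.

```python
import re

# Obsidian internal links: [[Page Name]] or [[Page Name|display alias]]
WIKILINK = re.compile(r"\[\[([^\]|]+)(?:\|([^\]]+))?\]\]")


def flatten_wikilinks(text):
    """Replace each Obsidian wiki-link with its alias (if present)
    or its target page name, leaving ordinary Markdown untouched."""
    return WIKILINK.sub(lambda m: m.group(2) or m.group(1), text)
```

Running notes through a filter like this before import keeps the prose readable in Notion, although the links would need to be recreated there by hand.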
I'm impressed to discover that most of my Android apps can easily share content via Notion. All of the apps shown above work together. Office Suite lets me link PDFs from its mobile storage into Notion. And I can embed Netflix videos inside pages on Notion.
Previously: Tweet Talking about Notion
by Bernie Goldbach in Waterford
WHILE ATTENDING a hands-on training session in Waterford, I was challenged by Mark Guerin to reveal a facet of my secondary learning outcomes. I believe reflective journaling is primary evidence of my secondary learning. If my students can produce high quality reflective journaling, they have achieved gold stars for secondary learning outcomes.
Since 2006, I have asked creative students taking a Media Writing module in the BSc degrees I teach to maintain a written journal. I ask them to write about specific themes as answers to questions. In fact, one of my repeat students is completing a journal during the summer of 2023 that provides the reader with a better understanding of creativity.
I need to continue making video clips about journaling on my YouTube channel and hope to extend this practice of creative journaling into other parts of my teaching practice with the Technological University of the Shannon. I previously archived some of the journaling work inside a digital repository by using tools on the campus OneDrive. However, because our online assets have changed since we consolidated into a larger university, I will need to reposition some of the work to ensure it's accessible by students and staff.
Ever since I adopted a daily routine of reviewing content I've highlighted, I've realised the exceptional value of reflecting on past work. And I've also become a big fan of cover art, end notes, and epigraphs. Thanks to electronic tools such as Readwise and Instapaper, I can see relationships between things I've created as sketchnotes, highlights, blog posts, and personal monographs. I'm trying to become more efficient at this process because I've more than 300 Moleskines in my attic containing my notes.
As I create this short blog post on a sunny day in County Waterford, I can hear John Heffernan typing away at 50 words per minute next to me. I've followed John to many learning spaces and our co-creative activity today reminds me of an event in Thurles, County Tipperary, where we saw this process before.
At that event, Pam Moran said, "Our lives are filled with spaces where we learn from the time we are born and forever after. Learning begins with sounds, smells, tastes, textures, images, and emotional responses. Learning connects us to the world and the world to us. We learn as individuals and with each other. We represent our learning through our stories, images that we capture, words on pages, work we accomplish, and the symbols we use to describe, qualify and quantify our universe. Our learning embodies the uniqueness of who we are; no two memories are alike. This space for learning creates an opportunity to connect and construct memories together that become internal documentaries of that which we choose to explore: to rewind, play, pause, or fast forward. We are all creators of learning moments. That’s what we do."
The 2012 Learning Spaces Conference connected enthusiasts at all levels of Ireland's educational system. It provided educators with practical ways to integrate technology into their learning spaces. It provided teachers with the tools which will allow them to connect with learners of the 21st century, who are at ease with technology.
The 2023 EPALE platform is a learning space that connects passionate facilitators in the field of adult education. I'm glad to add it to my timeline of lessons learned in adult education.

References about Secondary Learning
- Secondary Learning Outcomes as cited in Google Scholar.
- Bernie Goldbach keeps a Flickr photo album of his Moleskines.
- "In My Our learning spaces" with Pam Moran and Bernie Goldbach, April 16, 2012.
- "Graveside thinking about writing in the future tense", January 8, 2017
- "Writing for Social Layers", August 27, 2011
- Electronic Platform for Adult Education in Europe (EPALE)
by Bernie Goldbach in Clonmel.
Image from The Emotion Machine.
MOST OF MY WORK with Artificial Intelligence has been in the field of Machine Learning. I teach students how to improve their online profiles so they enhance their prospects of gaining good jobs. And last semester, I introduced generative AI into classrooms to increase the speed of creating responsive web sites.

On several elite university campuses, there are societies, such as Stanford's Club for Effective Altruism, that have funding from benefactors who want to examine ways to keep rogue AI at bay. I've gifted content from the Washington Post that explains these initiatives. Some of the deep thoughts and interest groups sound a bit cultish to me.
[Bernie Goldbach teaches digital transformation for the Technological University of the Shannon.]
How elite schools like Stanford became fixated on the AI apocalypse
by Nitasha Tiku
Extracted from The Washington Post, July 5, 2023
A billionaire-backed movement is recruiting college students to fight killer AI, which some see as the next Manhattan Project.
Paul Edwards, a Stanford University fellow who spent decades studying nuclear war and climate change, considers himself “an apocalypse guy.” So Edwards jumped at the chance in 2018 to help develop a freshman class on preventing human extinction.
Working with epidemiologist Steve Luby, a professor of medicine and infectious disease, the pair focused on three familiar threats to the species — global pandemics, extreme climate change and nuclear winter — along with a fourth, newer menace: advanced artificial intelligence.
On that last front, Edwards thought young people would be worried about immediate threats, like AI-powered surveillance, misinformation or autonomous weapons that target and kill without human intervention — problems he calls “ultraserious.” But he soon discovered that some students were more focused on a purely hypothetical risk: That AI could become as smart as humans and destroy mankind.
Science fiction has long contemplated rogue AI, from HAL 9000 to the Terminator’s Skynet. But in recent years, Silicon Valley has become enthralled by a distinct vision of how super-intelligence might go awry, derived from thought experiments at the fringes of tech culture. In these scenarios, AI isn’t necessarily sentient. Instead, it becomes fixated on a goal — even a mundane one, like making paper clips — and triggers human extinction to optimize its task.
To prevent this theoretical but cataclysmic outcome, mission-driven labs like DeepMind, OpenAI and Anthropic are racing to build a good kind of AI programmed not to lie, deceive or kill us. Meanwhile, donors such as Tesla CEO Elon Musk, disgraced FTX founder Sam Bankman-Fried, Skype founder Jaan Tallinn and ethereum co-founder Vitalik Buterin — as well as institutions like Open Philanthropy, a charitable organization started by billionaire Facebook co-founder Dustin Moskovitz — have worked to push doomsayers from the tech industry’s margins into the mainstream.
More recently, wealthy tech philanthropists have begun recruiting an army of elite college students to prioritize the fight against rogue AI over other threats. Open Philanthropy alone has funneled nearly half a billion dollars into developing a pipeline of talent to fight rogue AI, building a scaffolding of think tanks, YouTube channels, prize competitions, grants, research funding and scholarships — as well as a new fellowship that can pay student leaders as much as $80,000 a year, plus tens of thousands of dollars in expenses.
At Stanford, Open Philanthropy awarded Luby and Edwards more than $1.5 million in grants to launch the Stanford Existential Risk Initiative, which supports student research in the growing field known as “AI safety” or “AI alignment.” It also hosts an annual conference and sponsors a student group, one of dozens of AI safety clubs that Open Philanthropy has helped support in the past year at universities around the country.
Critics call the AI safety movement unscientific. They say its claims about existential risk can sound closer to a religion than research. And while the sci-fi narrative resonates with public fears about runaway AI, critics say it obsesses over one kind of catastrophe to the exclusion of many others.
“The conversation is just hijacked,” said Timnit Gebru, former co-lead of Ethical AI at Google.
Gebru and other AI ethicists say the movement has drawn attention away from existing harms — like racist algorithms that determine who gets a mortgage or AI models that scrape artists' work without compensation — and drowned out calls for remedies. Other skeptics, like venture capitalist Marc Andreessen, are AI boosters who say that hyping such fears will impede the technology’s progress.
Open Philanthropy spokesperson Mike Levine said harms like algorithmic racism deserve a robust response. But he said those problems stem from the same root issue: AI systems not behaving as their programmers intended. The theoretical risks “were not garnering sufficient attention from others — in part because these issues were perceived as speculative,” Levine said in a statement. He compared the nonprofit’s AI focus to its work on pandemics, which also was regarded as theoretical until the coronavirus emerged.
The foundation began prioritizing existential risks around AI in 2016, according to a blog post by co-chief executive Holden Karnofsky, a former hedge funder whose wife and brother-in-law co-founded the AI start-up Anthropic and previously worked at OpenAI. At the time, Karnofsky wrote, there was little status or money to be gained by focusing on risks. So the nonprofit set out to build a pipeline of young people who would filter into top companies and agitate for change from the inside.
Colleges have been key to this growth strategy, serving as both a pathway to prestige and a recruiting ground for idealistic talent. Over the past year and a half, AI safety groups have cropped up on about 20 campuses in the United States and Europe — including Harvard, Georgia Tech, MIT, Columbia and New York University — many led by students financed by university fellowships.
The clubs train students in machine learning and help them find jobs in AI start-ups or one of the many nonprofit groups dedicated to AI safety.
Many of these newly minted student leaders view rogue AI as an urgent and neglected threat, potentially rivaling climate change in its ability to end human life. Many see advanced AI as the Manhattan Project of their generation. Among them is Gabriel Mukobi, 23, who graduated from Stanford in June and is transitioning into a master’s program for computer science. Mukobi helped organize a campus AI safety group last summer and dreams of making Stanford a hub for AI safety work. Despite the school’s ties to Silicon Valley, Mukobi said it lags behind nearby UC Berkeley, where younger faculty members research AI alignment, the term for embedding human ethics into AI systems.
“This just seems like a really, really important thing,” Mukobi said, “and I want to make it happen.”
When Mukobi first heard the theory that AI could eradicate humanity, he found it hard to believe. At the time, Mukobi was a sophomore on a gap year during the pandemic. Back then, he was concerned about animal welfare, promoting meat alternatives and ending animal agriculture.
But then Mukobi joined Stanford’s club for effective altruism, known as EA, a philosophical movement that advocates doing maximum good by calculating the expected value of charitable acts, like protecting the future from runaway AI. By 2022, AI capabilities were advancing all around him — wild developments that made those warnings seem prescient.
Last summer, he announced the Stanford AI Alignment group (SAIA) in a blog post with a diagram of a tree representing his plan. He’d recruit a broad group of students (the soil) and then “funnel” the most promising candidates (the roots) up through the pipeline (the trunk). To guard against the “reputational hazards” of toiling in a field some consider sketchy, Mukobi wrote, “we’ll prioritize students and avoid targeted outreach to unaligned AI professors.”
Among the reputational hazards of the AI safety movement is its association with an array of controversial figures and ideas, like EA, which is also known for recruiting ambitious young people on elite college campuses.
EA’s drive toward maximizing good initially meant convincing top graduates in rich countries to go into high-paying jobs, rather than public service, and donate their wealth to causes like buying mosquito nets to save lives in malaria-racked countries in Africa.
But from the start EA was intertwined with tech subcultures interested in futurism and rationalist thought. Over time, global poverty slid down the cause list, while rogue AI climbed toward the top. Extreme practitioners began to promote an idea called “longtermism,” prioritizing the lives of people potentially millions of years in the future, who might be a digitized version of human beings, over present-day suffering.
In the past year, EA has been beset by scandal, including the fall of Bankman-Fried, one of its largest donors. Another key figure, Oxford philosopher Nick Bostrom, whose 2014 bestseller “Superintelligence” is essential reading in EA circles, met public uproar when a decades-old diatribe about IQ surfaced in January.
“Blacks are more stupid than whites,” Bostrom wrote, calling the statement “logically correct,” then using the n-word in a hypothetical example of how his words could be misinterpreted as racist. Bostrom apologized for the slur but little else.
After reading Bostrom’s diatribe, SAIA stopped giving away copies of “Superintelligence.” Mukobi, who identifies as biracial, called the message “sus” but saw it as Bostrom’s failure — not the movement’s.
Mukobi did not mention EA or longtermism when he sent an email to Stanford’s student listservs in September touting his group’s student-led seminar on AI safety, which counted for course credit. Programming future AI systems to share human values could mean “an amazing world free from diseases, poverty, and suffering,” while failure could unleash “human extinction or our permanent disempowerment,” Mukobi wrote, offering free boba tea to anyone who attended the 30-minute intro.
Students who join the AI safety community sometimes get more than free boba. Just as EA conferences once meant traveling the world and having one-on-one meetings with wealthy, influential donors, Open Philanthropy’s new university fellowship offers a hefty direct deposit: undergraduate leaders receive as much as $80,000 a year, plus $14,500 for health insurance, and up to $100,000 a year to cover group expenses.
The movement has successfully influenced AI culture through social structures built around swapping ideas, said Shazeda Ahmed, a postdoctoral research associate at Princeton University’s Center for Information Technology Policy. Student leaders have access to a glut of resources from donor-sponsored organizations, including an “AI Safety Fundamentals” curriculum developed by an OpenAI employee.
Interested students join reading groups where they get free copies of books like “The Precipice,” and may spend hours reading the latest alignment papers, posting career advice on the Effective Altruism forum, or adjusting their P(doom), a subjective estimate of the probability that advanced AI will end badly. The grants, travel, leadership roles for inexperienced graduates and sponsored co-working spaces build a close-knit community.
Edwards discovered that shared online forums function like a form of peer review, with authors changing their original text in response to the comments.
“It’s really readable writing, which is great,” Edwards said, but it bypasses the precision of vetting ideas through experts. “There’s a kind of alternate universe where the academic world is being cut out.”
Edwards’s first book was on the military origins of AI and he recently served on the United Nations’ chief climate panel, leaving him too rooted in real-world science and politics to entertain the kind of dorm-room musings accepted at face value in the forums.
Could AI take over all the computers necessary to end humanity? “Not happening,” Edwards said. “Too many humans in the loop. And there will be for 20 or 30 years.”
Since the launch of ChatGPT in November, discussion of AI safety has exploded at a dizzying pace. Corporate labs that view advanced artificial intelligence as inevitable and want the social benefits to outweigh the risks are increasingly touting AI safety as the antidote to the worst feared outcomes.
At Stanford, Mukobi has tried to capitalize on the sudden interest.
After Yoshua Bengio, one of the “godfathers” of deep learning, signed an open letter in March urging the AI industry to hit pause, Mukobi sent another email to Stanford student listservs warning that AI safety was being eclipsed by rapid advances in the field. “Everyone” is “starting to notice some of the consequences,” he wrote, linking each word to a recent op-ed, tweet, Substack post, article or YouTube video warning about the perils of unaligned AI.
By then, SAIA had already begun its second set of student discussions on introductory and intermediate AI alignment, which 100 students have completed so far.
“You don’t get safety by default, you have to build it in — and nobody even knows how to do this yet,” he wrote.
In conversation, Mukobi is patient and more measured than in his email solicitations, cracking the occasional self-deprecating joke. When told that some consider the movement cultish, he said he understood the concerns. (Some EA literature also embraces nonbelievers. “You’re right to be skeptical of these claims,” says the homepage for Global Challenges Project, which hosts three-day expenses-paid workshops for students to explore existential risk reduction.)
Mukobi feels energized about the growing consensus that these risks are worth exploring. He heard students talking about AI safety in the halls of Gates, the computer science building, in May after Geoffrey Hinton, another “godfather” of AI, quit Google to warn about AI. By the end of the year, Mukobi thinks the subject could be a dinner-table topic, just like climate change or the war in Ukraine.
Luby, Edwards’s teaching partner for the class on human extinction, also seems to find these arguments persuasive. He had already rearranged the order of his AI lesson plans to help students see the imminent risks from AI. No one needs to “drink the EA Kool-Aid” to have genuine concerns, he said.
Edwards, on the other hand, still sees things like climate change as a bigger threat than rogue AI. But ChatGPT and the rapid release of AI models have convinced him that there should be room to think about AI safety.
Interest in the topic is also growing among Stanford faculty members, Edwards said. He noted that a new postdoctoral fellow will lead a class on alignment next semester in Stanford’s storied computer science department.
The course will not be taught by students or outside experts. Instead, he said, it “will be a regular Stanford class.”
[Portions of this blog post were extracted via a subscription-funded gift link from the technology section of The Washington Post.]
IT'S TIME to reboot Irish Open Coffee sessions with creatives, this time with the halo effect of Grow Remote Ireland. Along with a team of animators, some working remotely with major studios, the helping hand of Grow Remote, and the support of the Questum Acceleration Centre, we will restart local Open Coffee sessions on the first Thursday of every month.
I remember the strong and enthusiastic cohort of start-ups that headed into Limerick city centre for Open Coffee months before the iPhone was launched.
Open Coffee is an open concept that we brought to Limerick one month after its origin in London in 2007, where it was started by Saul Klein, one of the founders of Skype. He wanted to encourage entrepreneurs, developers and investors to become acquainted with each other in informal gatherings so that the investment industry would become more transparent. It worked. Meetings are still held every week at over 100 locations worldwide. The timing and content vary by city.
I learned a lot from young entrepreneurs during Limerick Open Coffee sessions 16 years ago, and I would travel to Cork for the same energy. When my academic commitments kept me chained to the classroom, I watched the spirit of Open Coffee percolate into Fireside Chats arranged by Pat Carroll. The same sorts of activities now show up in my newsfeed as part of the StartUp Grind Events Listing.
I'm leaning into this version of a reboot of Open Coffee and I'm going to hashtag it as #GROC (Grow Remote Open Coffee). Done right, these first Thursday sessions will bring freshly brewed ideas along with locally baked pastries to a casual group that comes together to share ideas and learn how to leverage a tremendous array of resources for remote working employees and forward-thinking employers.
By Bernie Goldbach in Clonmel. Photo of his Rode WGII mic setup.
THE RODE Wireless Go II microphones delete old recordings automatically when you make new recordings, and while that can be a time-saver, it can also lead to a problem that haunts me. When I finish a long day's recording session, I have to recharge the microphones. If I'm not careful, I can accidentally switch the mics on while they charge, and they start recording. This has happened to me twice: I didn't notice the mics were recording for several hours while recharging, which meant they deleted recordings I had not yet archived.
I need to review my workflow and extract content before recharging the mics. That's relatively easy when I plug the mics into the USB port of my Surface Book. However, if I've several recordings to extract, it does take time because the Rode software handles the exporting. I cannot export the files easily through File Explorer inside Windows.
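One small guard against losing a day's work is a script that sweeps exported files into a dated archive folder before the mics go back on charge. This is only a minimal sketch, not Rode's own tooling: the folder names below are assumptions, so point `EXPORT_DIR` at wherever the Rode software drops your exported WAV files and `ARCHIVE_DIR` at your long-term storage.

```python
"""Archive exported Rode recordings before recharging the mics.

Sketch only: EXPORT_DIR and ARCHIVE_DIR are hypothetical paths --
adjust them to match where the Rode export software writes files
and where you keep your archive.
"""
from datetime import date
from pathlib import Path
import shutil

EXPORT_DIR = Path.home() / "Rode Exports"        # hypothetical export folder
ARCHIVE_DIR = Path.home() / "Recordings Archive"  # hypothetical archive root


def archive_exports(export_dir: Path = EXPORT_DIR,
                    archive_dir: Path = ARCHIVE_DIR) -> list[Path]:
    """Move every WAV file into a dated subfolder; return the new paths."""
    dest = archive_dir / date.today().isoformat()
    dest.mkdir(parents=True, exist_ok=True)
    moved = []
    for wav in sorted(export_dir.glob("*.wav")):
        target = dest / wav.name
        shutil.move(str(wav), target)  # move, so the export folder empties
        moved.append(target)
    return moved
```

Run it once after each export session; an empty export folder then confirms everything is safely archived before you plug the mics in to charge.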
If you're a Rode Wireless Go II user with a work-around, please offer your suggestion.
[Bernie Goldbach teaches creative media for business on the Clonmel Digital Campus of the Technological University of the Shannon.]