EDITS.WS

Category: wptavern.com

  • WordPress 6.3 Beta 2 Released, Ready for Testing

    WordPress 6.3 hit a major milestone today with the release of Beta 2. The release leads opted to skip Beta 1, which was delayed yesterday after technical issues with packaging the release, and moved straight on to Beta 2.

    As WordPress 6.3 is set to be the last major release of the Gutenberg project’s Phase 2 focus on customization, it ties up many loose ends related to the Site Editor and usability in general. It rolls in the ten most recent releases of the Gutenberg plugin – versions 15.2 through 16.1.

    Major interface enhancements in this release are outlined in the comprehensive 6.3 testing guide.

    Patterns are also getting a big boost in this release, as reusable blocks have been renamed to “synced patterns.” Pattern creation is now available to users and a new pattern library will be located inside the editor for saving and managing both synced and unsynced patterns. Theme authors now have the capability to register custom patterns to templates, so they appear in the start modal to speed up page building.

    WordPress 6.3 will introduce three new blocks – Details, Time to Read, and Footnotes – along with many improvements to existing blocks.

    This release comes with significant performance updates, most notably the addition of defer and async support to the WP Scripts API and fetchpriority support for images. Support for PHP versions 8.0+ has been improved, along with block template resolution, image lazy loading, and the emoji loader.

    In the rare event that the manual update of a theme or plugin fails, auto-rollback is available as of WordPress 6.3.

    Beta 2 testers are encouraged to file bug reports on WordPress Trac. During the beta period, up until the final release candidate, the WordPress project will also be doubling its monetary reward for any new, unreleased security issues that are uncovered. The vulnerabilities must be found in new code in order to qualify for the doubled reward.

    Check out the Beta 2 release post for more information on new features, accessibility improvements, and instructions on how to test. WordPress 6.3 is scheduled for release on August 8, 2023.

  • WordCamp Asia 2024 Scheduled for March 7-9 in Taipei

    WordCamp Asia has announced its dates for 2024. The flagship event is now officially scheduled for March 7-9 in Taipei, Taiwan. Organizers have secured the Taipei International Convention Center (TICC) to host the event. The venue is located in the business district not far from Taipei 101, the skyscraper (formerly known as the Taipei World Financial Center) that is the city’s most visible landmark. TICC has a capacity of more than 3,000 people.

    “The local community is massive and I’ve been told that WordCamp Taiwan (this October) alone would boast at least 500 attendees,” organizer John Ang said after visiting Taipei with his team to sign the venue. “While we were on the same trip, we were lucky to be able to celebrate the 20th Anniversary of WordPress with the Taiwanese community.

    “There’s also active work bringing in government support and other open source communities across the region (e.g. Hong Kong) to WordCamp Asia next year.” 

    photo credit: Preparations have started for WordCamp Asia 2024

    WordCamp Asia attendees can expect 3-5 tracks of sessions featuring diverse presentations across a range of topics for beginners and seasoned WordPress professionals alike. The venue also offers ample common areas for networking.

    More details on the event and calls for speakers and sponsors should be coming soon. Those hoping to attend can subscribe to updates on the event’s website or follow @WordCampAsia on Twitter.

  • #81 – James Dominy on Why AI Is to Be Embraced, Not Feared

    Transcript

    [00:00:00] Nathan Wrigley: Welcome to the Jukebox podcast from WP Tavern. My name is Nathan Wrigley. Jukebox is a podcast, which is dedicated to all things WordPress. The people, the events, the plugins, the blocks, the themes, and in this case how AI and WordPress can work together.

    If you’d like to subscribe to the podcast, you can do that by searching for WP Tavern in your podcast player of choice. Or by going to WPTavern.com forward slash feed forward slash podcast. And you can copy that URL into most podcast players.

    If you have a topic that you’d like us to feature on the podcast, I’m keen to hear from you. And hopefully get you or your idea featured on the show. Head to WPTavern.com forward slash contact forward slash jukebox, and use the form there.

    So on the podcast today, we have James Dominy. James is a computer scientist with a master’s degree in bioinformatics. He lives in Ireland, working at WP Engine’s Limerick office.

    This is the second podcast recorded at WordCamp Europe 2023 in Athens. James gave a talk at the event about the influence of AI on the WordPress community, and how it’s going to disrupt so many of the roles which WordPressers currently occupy.

    We talk about the recent rise of ChatGPT, and the fact that it’s made AI available to almost anyone. In less than 12 months, many of us have gone from never touching AI technologies to using them on a daily basis to speed up some aspect of our work.

    The discussion moves on to the rate at which AI systems might evolve, and whether or not they’re truly intelligent or just a suite of technologies which masquerade as intelligent. Are they merely good at predicting the next word or phrase in any given sentence? Is there a scenario in which we can expect our machines to stop simply regurgitating texts and images based upon what they’ve consumed; a future in which they can set their own agendas and learn based upon their own goals?

    This gets into the subject of whether or not AI is in any meaningful way innately intelligent, or just good at making us think that it is, and whether or not the famous Turing test is a worthwhile measure of the abilities of an AI.

    James’s background in biochemistry comes in handy as we turn our attention to whether or not there’s something unique about the brains that we all possess. Or if intelligence is merely a matter of the amount of compute power that an AI can consume. It’s more or less certain that, given time, machines will be more capable than they are now. So when, if ever, does the intelligence Rubicon get crossed?

    The current AI systems can be broadly classified as Large Language Models or LLMs for short, and James explains what these are and how they work. How can they create a sentence word by word if they don’t have an understanding of where each sentence is going to end up?

    James explains that LLMs are a little more complex than just handling one word at a time, always moving backwards and forwards within their predictions to ensure that they’re creating content which makes sense, even if it’s not always factually accurate.

    We then move on from the conceptual understanding of AI to more concrete ways it can be implemented. What ways can WordPress users implement AI right now? And what innovations might we reasonably expect to be available in the future? Will we be able to get AI to make intelligent decisions about our website’s SEO or design, and therefore be able to focus our time on other more pressing matters?

    It’s a fascinating conversation, whether or not you’ve used AI tools in the past.

    If you’re interested in finding out more, you can find all the links in the show notes by heading to WPTavern.com forward slash podcast, where you’ll find all the other episodes as well.

    And so without further delay, I bring you James Dominy.

    I am joined on the podcast today by James Dominy. How are you doing James?

    [00:04:51] James Dominy: I’m well, thanks. Hi Nathan. How are you doing?

    [00:04:53] Nathan Wrigley: Yeah, good, thanks. We’re at WordCamp Europe. We’re upstairs somewhere. I’m not entirely sure where we are in all honesty. The principle idea of today’s conversation with James is he’s done a presentation at WordCamp Europe all about AI. Now, I literally can’t think of a topic which is getting more interest at the moment. It seems the general press is talking about AI all the time.

    [00:05:17] James Dominy: Yeah.

    [00:05:17] Nathan Wrigley: It’s consuming absolutely everything. So it’s the perfect time to have this conversation. What was your talk about today? What did you actually talk about in front of those people?

    [00:05:24] James Dominy: Right. So my talk was about the influence of AI on the WordPress community. The WordPress community involving, in my mind, roughly three groups. You’ve got your freelancer, single content generator, blogger. You have someone who does the same job but in a business as in an agency or a marketing or a brand context. And then on the other side, you’ve got software developers who are developing plugins or working on the actual WordPress Core.

    And AI is going to be changing the way all of those people work. Mostly I focused on the first and the third groups. I don’t know enough about the business aspects to really talk about the agency and the marketing side of things.

    I personally, I’m a software developer, so I suppose I really skewed towards that in the end. But my wife has been a WordPresser for 15, 20 years, which is how I ended up doing this. And she’s been using ChatGPT quite actively recently.

    And she’s been chatting to me after work going, you know, I was trying to use ChatGPT to do X Y Z. And I thought, well, you know, that’s interesting. I know some bit about machine learning and the way these things work. I’ve read some stuff on the internals and I have opinions.

    [00:06:33] Nathan Wrigley: Perfect.

    [00:06:34] James Dominy: So that’s how I got here.

    [00:06:35] Nathan Wrigley: Yeah. Well, that’s perfect. Thank you. It seems like at the moment the word ChatGPT could be easily interchanged with AI. Everybody is using that as a synonym for AI and it’s not really, is it? It really is a much bigger subject. But that is, it feels at the moment, the most useful implementation in the WordPress space. You know, you hook it into the block editor in some way, shape or form, and you create some content in that way.

    [00:07:00] James Dominy: And I mean, I am absolutely guilty of that. I think the number of times I’ve said ChatGPT, I mean AI generative systems, or something during my workshop this morning is well beyond count.

    It is likely to fall victim to a trademark thing at some point. Like Google desperately tries to claim that Google is a trademark and shouldn’t be used as a generic term for search. I expect the same thing will happen with ChatGPT at some point.

    [00:07:25] Nathan Wrigley: This is going to sound a little bit, well, maybe snarky is the wrong word, but I hope you don’t take it this way, but it feels to me that the pace of change in AI is so remarkably rapid. I mean, like nothing I can think of. So, is there a way that we can even know what AI could look like in a year’s time, two years’ time, five years’ time? So in other words, if we speculate on what it could be to WordPress, is that a serious enterprise? Is it a serious endeavor? Or are we just hoping that we get the right guess? Because I don’t know what it’s going to be like.

    [00:07:59] James Dominy: I think if we rephrase the question a bit, we might get a better answer. So AIs are human-designed systems. And there is a thing called the alignment problem where there is an element of design to AIs, and we give it a direction, but it doesn’t always go the direction we want, and I think that is the unanswerable part of this question.

    Yes, there are going to be emergent surprises from the capabilities of AIs. But for the most part, AIs are developed with a specific goal in mind. Large language models were developed, okay I’m taking a wild educated guess here perhaps, but they were developed with the idea of producing text that sounded like a human. And I mean, we’ve had the Turing test for nearly a hundred years, more than a hundred years? 21, yeah, more than a hundred years now.

    So I mean, that’s been a goal for a hundred years. Everyone says that AI has advanced rapidly and it has, but the core mathematical principles that are involved, those haven’t advanced. I don’t want to take away from the people who’ve done the work here. There has been work that’s been put into it, but I think what’s really given us the quantum leap here is the amount of computational power that we can throw at the problem.

    And as long as that is increasing exponentially, I think we can expect that the models themselves will get exponentially better at roughly the same rate as the amount of hardware we throw at it.

    [00:09:28] Nathan Wrigley: So we can stare into the future and imagine that it’s going to get exponentially, logarithmically it’s going to, it’s just going to get better and better and better. But we can’t predict the ways that it might output that betterness. Who knows what kind of interface there’ll be, or.

    [00:09:41] James Dominy: Yeah. I think better’s a very evasive term perhaps, on my part. I think there are specific ways that it is going to get better. For example, we are going to see less confused AIs, because they are able to process more tokens. They have deeper models. Deeper statistical trees for outputs. They’re able to take more context in and apply it to whatever comes out. So in that sense we’re going to see a better output from an AI. Is it going to ever be able to innovate? Ooh, that’s a deep philosophical question, and I mean we can get into that, but I don’t know that we have time.

    [00:10:20] Nathan Wrigley: I think I would like to get into that.

    [00:10:22] James Dominy: Okay.

    [00:10:22] Nathan Wrigley: Because when we begin talking about AI, I think the word which sticks is intelligence. The artificial bit gets quickly forgotten and we imagine that there is some kind of intelligence behind this, because we ask it a fairly straightforward, or even indeed quite complicated question.

    And we get something which appears to pass the Turing test. Just for those people who are listening, the Turing test is a fairly blunt measure of whether you are talking to something which is a human or not human, masquerading as a human. And if something is deemed to have passed the Turing test, it’s indistinguishable from a human.

    And so, I have an intuition that really what we’re getting back, it’s not intelligent in any meaningful sense of the word. It’s kind of like a regurgitation machine. It’s sucking in information and then it’s just giving us a best approximation of what it thinks we want to hear. But it’s not truly intelligent. If you asked it something utterly tangential, that it had no capacity, it had no data storage on, it would be unable to cope with that, right?

    [00:11:22] James Dominy: I think yes. If you can clearly delineate the idea of, we have no data on this. Which is very difficult considering the amounts of information involved; you know, give something access to Wikipedia and that AI generative system might well be able to produce an opinion on practically anything these days.

    But if it hasn’t read the latest paper on advanced quantum mechanics, it’s not going to know it. That text isn’t going to be there. Could it reproduce that paper? That’s a subtly different question, because then it comes down to, well, when a human produces that paper, what are they really doing?

    They’re synthesizing their knowledge from a bunch of different things that they’ve learned, and they’re producing text in a language, in a grammar, that they have learned in a very similar way, that statistically speaking this sentence follows this grammatical form. Because I have learned that as a child through hearing it several thousand times from the people around me and my parents. What’s different?

    A more practical example here, I was having this discussion earlier today, and someone said yes, but they’re not truly intelligent. But if you consider it, even now, we can ask ChatGPT something, and I’m going to be abstract because I don’t have a concrete example here, I’m sorry. But we can say to ChatGPT, I want you to produce a poem in the style of Shakespeare, a sonnet or something. But I want you to use a plot from Goethe.

    Okay, fine. Now it can do that. It can give you a response. I’m not sure that it’ll be a good response. I haven’t tried that particular one. But in that context, if you are asking a human to do that, and we automatically make the assumption of other human beings that they understand. And, sorry, I’m making air quotes here. That they understand, in quotes, who Goethe is. That that is a person and a character. That Goethe has a particular style and a proclivity for a certain pattern in his plots.

    And that those are all, to use a computer science term, symbolic representations. Abstract concepts. So is ChatGPT actually understanding those abstract concepts? Does it understand that Goethe is a person? Educated guess here, probably not. But it does understand that Goethe refers to a certain, it can draw a line around all the stuff that it has learned and know, this is Goethe.

    It has a concept of what it thinks Goethe is. Then from there it can say, and he has done work on the following things, and these are plots. And so it kind of understands. There’s another line there about what a plot is, which is a very abstract concept.

    Does that mean it’s intelligent? Does that mean it understands? I don’t know. That’s my answer because I did biochemistry at university, and there’s also the question there, and it’s exactly the same question. It’s at what point do the biological machines, the biochemical machines, your actual proteins and things that are obviously on their own, unintelligent, and yet when they act in concert, they produce a cell, and a living being.

    Where does that boundary exist? Is it gray? Is it a hard line? And the same for me is true of the intelligence question here. Intelligence is a, it’s an agglomeration of lots of small, well-defined things that when they start interacting, become more than the sum of their parts. Does it come down to the Turing test? I mean, the fact that people on support, little support popups on the web, have to ask, are you a human every now and then. It immediately says, we have AIs that have passed the Turing test long ago.

    But here in this case, like the extended Turing test is the thing actually intelligent? I don’t know. I genuinely don’t know the answer there. In some sense, yes, because it’s doing almost the same thing as we are, just in a different, with different delineations and different abstractions, but the process is probably the same.

    [00:15:33] Nathan Wrigley: Given that you’ve got a background in, forgive me, did you say biochemistry?

    [00:15:37] James Dominy: Yeah, biochemistry and computer science, bioinfomatics.

    [00:15:39] Nathan Wrigley: Yeah, do you have an intuition as to whether the substrate of the brain has some unique capacity that can lock intelligence into it? In other words, is there a point at which a computer cannot leap the hurdle? There’s something special about the brain, the way the brain is created? This piece of wetware in our head.

    [00:16:00] James Dominy: Unpopular opinion, I think it comes down to brute force count. We’ve got trillions of cells. Large language models, I don’t know what the numbers are for GPT-4, but we’re not at trillions yet. Maybe when we get there, I don’t know where the tipping point is, you know. Maybe when we get to tens of billions, or whatever number it happens to be, is the point where this thing actually becomes intelligent.

    And we would be unable to distinguish them from a human, other than the fact that we’re looking at a screen that, that we know it’s running on the chip in front of us. But if it’s over the internet and it’s on a machine running, or whether we’re talking to a person in the support center. Or we are at the McDonald’s kiosk of 2050 and being asked whether we want fries with that. If we can’t see the person who’s asking the question, if we’re at the drive-through, we can’t see the person. Do we care?

    [00:16:54] Nathan Wrigley: Interesting. You mentioned a couple of times large language models, often abbreviated just to LLM. My understanding at least, forgive me I’m, I really genuinely am no expert about this. This is the underpinning of how it works. I’m going to explain it in crude terms, and then I’m hoping you’ll step in and pad it out and make it more accurate.

    [00:17:12] James Dominy: I should caveat anything that I say here with I also am not an expert on these, but I will do what I can.

    [00:17:17] Nathan Wrigley: So a large language model, my understanding is that things like ChatGPT are built on top of this, and essentially it is vacuuming up the internet. Text, images, whatever data you can throw at it. And it’s consuming that, storing that. And then at the point where you ask it something, so write a sonnet in the style of Goethe, written by Shakespeare. It’s then making a best approximation, and it’s going through a process of, okay, what should the first word be? Right, we’ve decided on that. Now, let’s figure out the second word, and the third word and the fourth word. Until finally it ends in a full stop and it’s done.

    And that’s the process it’s going through. Which seems highly unintelligent. But then again, that’s what I’m doing now. I’m probably selecting in some way what the next word is and what the next word is. But yeah, explain to us how these large language models work.

    [00:18:03] James Dominy: I think that’s a pretty fair summation. I think the important bit that needs to be filled in there is that what we perceive and use as customers of AI systems in general is a layer of several different models. There is a lot of pre-processing that goes into our prompts and post-processing in terms of what comes out.

    But fundamentally the large language model is, yes, it’s strings of text generally. There are different systems; the AI image systems are a different form of maths. Most of them, at least the ones that I know of, are mostly based on something called Stable Diffusion.

    We can chat about that separately, but large language models tend to be trained on a large pile of text where they develop statistical inferences for the likelihood of some sequence of words following some other sequence of words. So as you say, like, if I know that a pile of words were written by Goethe, then I can sub-select that aspect of my training data.

    And I’m personifying an AI here already. The AI can circumscribe, isolate a portion of its training set, and say, okay I will use this subset of my training, and use the statistical values for what words follow what other words that Goethe wrote. And then you will get something in the style of Goethe out.
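    The word-following statistics James describes can be sketched as a toy bigram model. This is purely illustrative (real LLMs use neural networks over tokens, not raw word counts), and the tiny corpus here is made up:

```python
import random
from collections import Counter, defaultdict

# A tiny made-up corpus standing in for "a large pile of text".
corpus = "the sun rose and the sun set and the moon rose".split()

# Count how often each word follows each other word (bigram statistics).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev`."""
    counts = following[prev]
    words = list(counts.keys())
    weights = list(counts.values())
    return random.choices(words, weights=weights)[0]

# Generate a short "sentence" one word at a time, as described above.
word, output = "the", ["the"]
for _ in range(5):
    word = next_word(word)
    output.append(word)
print(" ".join(output))
```

    Even this crude version shows the mechanism: each word is chosen in proportion to how often it followed the previous word in the training text.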

    [00:19:29] Nathan Wrigley: It’s kind of astonishing that that works at all. That one word follows another in something which comes out as a sentence because, I don’t know if you’ve ever tried that experiment on your phone where you begin the predictive text. On my phone there’s usually three words above the little typewriter, and it tries to say what the next word is based upon the previous word.

    [00:19:49] James Dominy: It’s not called auto corrupt for nothing.

    [00:19:50] Nathan Wrigley: Yeah, so you just click them at the end of that process, you have fantastic gibberish. It’s usually quite entertaining, and yet this system is able to, in some way just hijack that whole process and make it so that by the end the whole thing makes sense in isolation.

    It is Goethe. It looks like Shakespeare, sounds like Shakespeare, could easily be Shakespeare. How is it predicting into the future such that by the end, the whole thing makes sense? Is there more processing going on than, okay, just the next word. Is it reading backwards?

    [00:20:22] James Dominy: Yes, absolutely. Again, not an expert on LLMs, but there is this thing called a Markov Model, which is a much more linear chain. It’s used often in bioinformatics, for predicting the most likely next amino acid or nucleic acid in a genomic or a proteomic sequence.

    And so Markov Models are very simple. They have a depth and that is how much history they remember of what they’ve seen. So you point a Markov Model at the beginning of the sequence of nucleic acid letters, the ACGTs. And then you want to say, okay, I’ve managed to sequence this off my organism. I’ve got a hundred bases and I want to know what the most likely one after that is, because that’s where it got cut off.

    You give it a hundred, maybe you have a buffer of 10. So it remembers the last ten. It sort of slides this window of visibility over the whole sequence and mathematically starts working out, you know, what comes after an A? Okay, 30% of the time it’s a C. 50% of the time it’s a G. And by the end of it, it can with reasonable accuracy to some value of how much information you’ve given it, predict okay, in this particular portion of 10 that I’ve seen, the next one should be T.

    And they get better as you give them more and more information. As you give them a bigger and bigger window. As you let them consume more and more memory whilst they’re doing their job, their accuracy increases.

    I imagine the same is true of large language models, because they do. They don’t just predict the next word, they operate on phrases, on whole sentences. At some point, maybe they already do, but I imagine they operate on whole paragraphs. And again, it depends on what you’re trying to produce. Like if you’re trying to produce a legal contract that’s got a fairly prescribed grammar and form to it. And you know, then like statistically you’re going to produce the same paragraph over and over again because you want the same effect out of contracts you do all the time.
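    A minimal version of the sliding-window Markov model James describes might look like this in Python. The nucleotide sequence is invented, and the window is 2 bases rather than 10, purely to keep the example short:

```python
from collections import Counter, defaultdict

K = 2  # the window ("depth"): how much history the model remembers

# A made-up nucleotide sequence standing in for sequenced bases.
sequence = "ACGTACGTACGAACGT"

# Slide a window of K bases over the sequence and count what follows it.
counts = defaultdict(Counter)
for i in range(len(sequence) - K):
    window = sequence[i:i + K]
    counts[window][sequence[i + K]] += 1

def predict(window):
    """Return the most likely next base after `window`, per the counts."""
    return counts[window].most_common(1)[0][0]

print(predict("AC"))  # in this corpus, "AC" is always followed by "G"
```

    Giving the model a longer sequence or a wider window sharpens its statistics, which is the point above about accuracy increasing with more information and memory.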

    [00:22:22] Nathan Wrigley: You described this slider. That really got to the nub of it. I genuinely didn’t realize that it wasn’t doing any more than just predicting the next word. And because that’s the way I thought about it, I thought it was literally astonishing that it could throw together a sentence based upon just the next word, if it didn’t know what two words previously it had written.

    It’s back to my predictive text, which produces pure gobbledygook. But it still, occasionally, it goes down a blind alley, doesn’t it? Because although that is, presumably 99 times out of a hundred that will lead to a cogent sentence, which is readable. Occasionally it does this thing, which I think has got the name hallucinate, where it just gets slightly derailed and goes off in a different direction. And so produces something which is, I don’t know, inaccurate, just nonsense.

    [00:23:06] James Dominy: Yes. Well known for being confidently wrong for sure. I’ve experienced something similar, and I find that it is especially the case where you switch contexts. Like when you are asking it to do more than one thing at a time, and you make a change to the first thing that you expect to carry over into the context of the second task, and it just doesn’t. It gets confused.

    And then the two things, this is especially true in coding, where you ask it to produce one piece of code and a function here, and another piece of code and a function on the other side. And you expect them, those two functions to interoperate correctly. Which means that you have to get the convention, the interface between those two things, the same on both sides.

    But if you say, actually, I want this to be called Bob, that doesn’t necessarily translate. Again, I suppose this is my intuition. There are a lot of ways that that failure can happen. The most obvious one is that you’re doing too much and it’s run out of tokens.

    Tokens are sort of an abstraction. Sorry, I use that word a lot. Computer scientist. Tokens are, they’re not strictly speaking individual words, but they are a rough approximation of a unit of knowledge, context. I don’t know what the right word is here. They chose token, right? So, if you use the API for ChatGPT, one of the things that you pass is how many tokens the call is allowed to use.

    Because you are charged by tokens. And if you say only 30 tokens, you get worse answers than if you give it an allowance of a hundred tokens. Meaning that you might have given it a problem that exceeds the window that I was describing earlier. That sort of backtrack of context that it’s allowed to use.

    Or you give it two contexts and together they just go over, and then it’s confused because it doesn’t know which, again, I say this as a semi-educated guess. We as humans don’t have a good definition of what context means in this conversation. How do we expect a computer system to?
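    The token-budget failure James is guessing at can be illustrated with a toy fixed-size context window. Treating each word as one token is a simplification (real tokenizers split text differently), and the budget and prompt here are invented:

```python
WINDOW = 8  # a tiny "token budget"; real models allow thousands of tokens

# Treat each word as one token, a rough simplification.
conversation = ("call the first function Bob . "
                "now write a second function that calls it").split()

# The model can only "see" the most recent WINDOW tokens.
visible = conversation[-WINDOW:]
print(visible)
# The earlier instruction ("call the first function Bob") has scrolled out
# of the window, so nothing ties the second task back to the name Bob.
```

    Anything that scrolled out of the window simply no longer exists for the model, which matches the renamed-function confusion described above.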

    [00:25:05] Nathan Wrigley: Just as you’ve been talking, in my head, I’ve come up with this analogy of what I now think AI represents to me, and it represents essentially a very, very clever baby. There’s this child crawling around on the ground, I really do mean an infant who you fully forgive for knocking everything over and tipping things over, damaging things and what have you. And yet this child can speak. So on the one hand, it can talk to you, but it’s just making utterly horrific mistakes because it’s a baby and you forgive it for that. So I don’t know how that sits, but that’s what’s landed in my head.

    [00:25:40] James Dominy: I wouldn’t say that AI is in its infancy anymore, but it’s probably in its toddler year, and maybe we need to watch out when it turns two.

    [00:25:47] Nathan Wrigley: So we’ve done the sort of high-level, what is AI, and all of that. That’s fascinating. But given that this is a WordPress event and it’s a WordPress podcast, let’s bind some of this stuff to the product itself. So WordPress largely is a content creation platform. You open it up, you make a post, you make a page, and typically into that goes text, sometimes images, sometimes video, possibly some other file formats. But let’s stick with the model of text and images. Why do we want, or how could we put AI into WordPress? What are the things that might be desirable in a WordPress site that AI could assist us with?

    [00:26:21] James Dominy: I am totally going to be stealing some ideas from the AI content creation things that have happened this morning. I mean, there’s the obvious answer. I need to generate a thousand words for my editor by 4:00 PM today. Hey, ChatGPT, can you generate a thousand words on topic, blah?

    I think there are a lot of other places. I’d be super surprised if this hasn’t actually happened already. But, hey ChatGPT, write me an article that gets me to the top five Google ranking.

    The other obvious place for me as a software developer is using it to develop code. Humans are inventive. We’re going to see a lot of uses for AI that we never thought of. That’s not a bad thing at all. The more ways that we can use AI, I think the better.

    Yes, there are questions about the dangers, and I’m sure that’s a question coming up later on, so I won’t dive into them now, but in the WordPress community, there’s content creation, but there’s also content moderation, where AI can probably help a lot. Analyze this piece of text for me and tell me, is it spam? Does it contain harmful or hateful content?

    Again, it’s a case of you get what you give. There’s that story about Microsoft, I think it was Microsoft, and the chatbot that turned into a horrible Nazi racist within about two hours, having been trained on Twitter data. We need to be careful about that, certainly. I’m struggling to think of things beyond the obvious.

    [00:27:47] Nathan Wrigley: Well, I think probably it is going to be the obvious, isn’t it? Largely, people are popping in text and so having something which will allow you within the interface, whether you are in a page builder or whether you’re using the Gutenberg editor, the ability to interrupt that flow and say, okay, I’ve written enough now, ChatGPT, take over. Give me the next 300 words please. Or just read what I’ve written and can you just finish this? I’m almost there.

    [00:28:11] James Dominy: Yeah, we are doing it already, even if it’s a sort of fairly primitive flow now where we write some stuff in our block editor, copy it up, pop it in ChatGPT or Bard or whatever, and say, hey, this is too formal. Or this is not formal enough. And it’s really great at that. Make this sound more businessy. And it understands the word businessy. The tool integration, it’s obvious in a lot of ways, but I think there are going to be a lot of non-obvious integrations. Like, oh wow, I wish I thought of that, and, you know, made my millions off that product. I mean, Jetpack is doing it already, you know. I am able to actively engage with ChatGPT whilst I’m editing my blog post. Fantastic.

    Another thing that I’ve just thought of is oh, I run a WooCommerce site and I want to use, not necessarily ChatGPT, but some other AI system to analyze product sales and use that to promote, to change the listing on my product site, so that I can sell more product. That’s going to happen.

    [00:29:09] Nathan Wrigley: Yeah, given that it’s incredibly good at consuming data.

    [00:29:13] James Dominy: Yeah, or even generating it on the fly. Generate 300 different descriptions of this product and randomize them. Put them out there and see which one sells best. We are doing that manually already. It’s A/B testing at a larger scale.

    [00:29:28] Nathan Wrigley: Yeah. You can imagine a situation where the AI runs the split test, but it’s divided over 300 variations. And it decides for itself which is the winner.
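    The loop Nathan describes, show many variations, track results, keep favoring whichever is ahead while still exploring, is essentially a multi-armed bandit. A minimal epsilon-greedy sketch in Python; the function name and the hidden conversion rates are made up for illustration, standing in for real sales data:

```python
import random

def pick_winner(true_rates, rounds=5000, epsilon=0.1, seed=1):
    """Epsilon-greedy split test: mostly show the variant with the best
    observed conversion rate, but show a random variant epsilon of the time."""
    rng = random.Random(seed)
    shows = [0] * len(true_rates)
    sales = [0] * len(true_rates)
    # Unseen variants count as rate 1.0 so each gets tried at least once.
    rate = lambda i: sales[i] / shows[i] if shows[i] else 1.0
    for _ in range(rounds):
        if rng.random() < epsilon:
            i = rng.randrange(len(true_rates))          # explore a random variant
        else:
            i = max(range(len(true_rates)), key=rate)   # exploit the current leader
        shows[i] += 1
        sales[i] += rng.random() < true_rates[i]        # simulated customer
    return max(range(len(true_rates)), key=rate)

# Three hypothetical product descriptions with hidden conversion rates.
# The test converges on the 12% variant without ever being told the rates.
winner = pick_winner([0.02, 0.12, 0.04])
print("best variant:", winner)
```

    In practice the "restart over and over" part of the idea just means re-running the loop as new variations are generated; the bandit never needs a human to declare the winner.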

    [00:29:39] James Dominy: On a day-to-day basis.

    [00:29:40] Nathan Wrigley: On an hourly basis. Implements the winner and then begins the whole process over and over again. I also wonder if in WordPress there is going to be AI to help lay out things. So at the moment we have the block editor. It enables you to create fairly complex layouts. We also have page builders, which allow us to do the same thing. So it alludes to what I was speaking about a moment ago.

    Talking, so literally talking, as well as typing in. I would like a homepage. I would like that homepage to show off my plumbing business, and here’s my telephone number. I’d like to have a picture of me, or somebody doing some plumbing, some additional content down there. You get the picture?

    [00:30:17] James Dominy: Yeah, absolutely.

    [00:30:18] Nathan Wrigley: A few little prompts, and rather than spitting out text or an image, whole layouts come out. And we can pick from 300 different layouts. I’ll go for that one, but now make the buttons red. The AI takes over the design process in a way.

    [00:30:32] James Dominy: Yeah. I’m going to confess here that I’m absolutely stealing this opinion from the AI panel earlier. I think the danger for WordPress specifically there, is that that level of automation for us with human engagement and, you know, developing something through conversation with an AI, might actually skip WordPress entirely. Why must the AI choose WordPress to do this?

    Maybe if we as a WordPress community invest in making WordPress AI integrated, then yeah, absolutely. Then hopefully we’re first to market with that, in a way, and it will generate stuff in WordPress. But there’s no reason for it not to choose a Wix page as the better solution for you as a plumber, who doesn’t update things very often. You just want something static, you know.

    Chances are it’ll just say, here is some HTML, it does the job for you, it’s pretty. I made some images for you as well. And all you need to do is run this sequence of commands to SSH it up to the provider of your choice. Or, I have selected this provider because I know how much they all charge and this is the cheapest. Or the fastest, whatever you’ve asked for.

    [00:31:41] Nathan Wrigley: Oh, interesting, okay. So it’s not just bound inside the WordPress interface. Literally, put this in the cheapest place as of today. And then if it changes in the next 24 hours, just move it over there and change the DNS for me.

    [00:31:53] James Dominy: One day. For sure. Yeah.

    [00:31:54] Nathan Wrigley: Okay. So that very nicely ties into the harms.

    [00:31:58] James Dominy: There it is.

    [00:31:58] Nathan Wrigley: What we’ve just laid out is potentially quite harmful to a lot of the jobs that people do inside of WordPress. We’ve just described a workflow in which many of the things that we would charge clients for could potentially be handed over to AI. Whether that’s through a voice interface, a visual interface, or typing.

    So that is concerning, if we are giving AI the option to put us out of work. And I know at the moment, this is the hot topic. I’m pretty sure that there’s some fairly large organizations who have begun this process already. They’ve taken some staff who are doing jobs which can be swapped out for AI, and they’ve shed those staff.

    And whilst we’re in the beginning phase of that, it seems like we can swallow a certain amount of people getting laid off. The problem, potentially, is that if we keep laying people off over and over and over again and we give everything over to the AI, we’re suddenly in a position where, well, there are no humans in this whole process anymore. Does any of that give you pause for thought?

    [00:32:53] James Dominy: Yeah, it certainly does. I think we should temper our expectations of the capabilities of AI. So there’s a technical term called a terminal goal. The delineation between specific artificial intelligences and machine learning, in that world, and the concept of a general artificial intelligence, which is what everyone thinks of when they think of the I in artificial intelligence, is that a general AI is capable of forming its own terminal goals.

    Don’t get me wrong, we have AIs that are capable of forming what are called intermediate goals. If you tell an AI of a particular type to go and do a particular thing, then it is capable of forming intermediate steps: in order to do the thing you’ve told me, I need to first do this, which requires me to do that. It forms a chain of goals, but none of those goals are emergent from the AI. They are all in service of a goal we have given the AI externally.

    That ability to form a goal internally is the concept of a terminal goal. And we don’t have, large language models don’t have terminal goals. Large language models, stable diffusion, all of the different algorithms that are hot topics today, are all couched within the idea of solving a problem given to them as an input.

    Which means there’s always going to need to be a human. At least with what we’ve got now. No matter how good these models get, how much brain power we give them. And this maybe is going against what I said earlier of like, I think it’s probably a quantity thing.

    Maybe there’s a tipping point. Maybe there’s a tipping point where the intermediate goal that it forms is indistinguishable from a terminal goal in a human brain. But for the moment, I think there always needs to be a human there to give the AI the task to solve. OpenAI isn’t just running servers randomly doing stuff. It spends its computational time answering users’ prompts and questions.

    [00:34:48] Nathan Wrigley: So if we pursue artificial intelligence research, and the end goal is to create an AGI, then presumably at some point we’ve got something which is indistinguishable from a human because it can set its own goals.

    [00:35:02] James Dominy: The cyberpunk dystopia, right?

    [00:35:03] Nathan Wrigley: But we’re not there yet. This is a ways off, my understanding at least anyway. But in the more short term, let’s bind it to the loss of jobs.

    [00:35:11] James Dominy: In my workshop this morning, the primary point that I wanted to get across is, if you are currently in the WordPress community, employed and/or making an income out of WordPress, then ChatGPT, Bard, generative AI, large language models are a tool that you should be learning to use. They’re not going to replace you.

    Maybe that’s less true on the content generation side, because large language models are particularly good at that. But there’s a flip side to that because on the software development side, programming languages have very strict grammars, which means the statistical model is particularly good at producing output for programming languages.

    It’s not good at handling the large amounts of complexity that can exist in large pieces of code. But equally, if you ask it to give you a hundred things to do in Athens, whilst I’m totally, totally working hard at a conference, then you are probably going to get repeats. You might run into the confusion problem, the hallucination issue, at some point there, where a hundred is just too much.

    Nobody has ever written an article of a hundred things to do in Athens in a day. I don’t know, I haven’t tried that. I’m guessing that there are going to be limitations. So some jobs are more under threat than others, but I think that if you’re already in the industry, or in the community and working with it, go with it and absorb the tools into your day-to-day flow.

    It’s going to make you better at what you do. Faster at what you do. Hopefully able to make more money. Hopefully able to communicate with more people, translations et cetera. Make your blog multilingual. There are a lot of things that you can use it for that aren’t immediately coming after your job.

    The problem for me, and this again is the point that I was trying to get across in the workshop, the problem is the next generation. The people who are getting into WordPress today and tomorrow, and in six months’ time. Who are coming into a world where AI is already in such wide usage that it’s solving the simple problems. And the same is true here: my editor wants 200 words or whatever on fun things to do in Athens overnight.

    Okay, great. ChatGPT can do that for the editor. Why does he need a junior content writer anymore? But the problem is, I mean, we’ve already said, sometimes it’s spectacularly wrong. Does that editor always have the time to actually vet the output? Probably not. And so the job of that junior is going to transform into, they need to be a subeditor. They need to be a content moderator almost, rather than a content generator.

    But that’s a skill that only comes from having written the content yourself. We learn by making mistakes, and if we are not making those mistakes because AI is generating the stuff, it’s either not making mistakes or it’s making mistakes that we haven’t made before ourselves, and thus don’t recognize as mistakes. So my fear about the job losses aspect of AI is not that it’s going to wipe out people who are working already. It’s that it’s going to raise the barrier to entry for the next generation. It’s knocking the bottom rung out of the ladder.

    And that won’t change unless we rethink the ways that we teach the basics to people as they enter the WordPress community, the industry, and all the industries which AI is going to affect. You know, it’s a catch-22. We have to teach people to do stuff without AI, so they can learn the basics. But at the same time, they also have to learn how to use AI, so they can do the basics in the modern world.

    And I mean, we get back to that old debate like, why am I learning trigonometry in school? Because maybe someday it actually helps you do your job. Admittedly, so far, not so much. But I will say this. History, I did history in school. That has surprisingly turned out to be one of the most useful subjects I ever did, just because it taught me how to write. Which I didn’t learn in English class. Go figure.

    [00:39:17] Nathan Wrigley: It sounds like you are quite sanguine for now. If you are in the space and listening to this podcast, everything is fine right now.

    [00:39:26] James Dominy: Yeah.

    [00:39:27] Nathan Wrigley: Maybe less sanguine for the future. Given that, do you think that AI more broadly needs to be corralled? There need to be guardrails put in place. There needs to be legislation. I don’t know how any of that works, but manufacturers of AI being put under the auspices of, well, it would have to be governments, I guess. But some kind of system of checks and balances to make sure that it’s not, I don’t know, deliberately producing fakes. Or that the fakes, the hallucinations, are getting minimized. That it’s not doing things that aren’t in humanity’s best interests.

    [00:39:59] James Dominy: Absolutely. Yes. Although I’m not sure how we could do a good job of it, to be fair. The whole concept of, we want AIs to operate in humanity’s best interests. Who decides? The alignment problem crops up here where, it’s well known that we can train an AI to do something we think that it’s going to do, and it seems to be doing that thing until suddenly it doesn’t.

    And we just get some weird output. And then when we go digging, we realize it was actually trying to solve an entirely different problem to the one we thought we were training it on, one that just happened to have a huge amount of overlap. When we get to the edge cases, it goes off in what we think is a wildly wrong direction. But it is solving the problem that it was trained to solve. We just didn’t know we were training it to solve that problem.

    As far as regulation goes. Yes, I think regulation, it’s coming. I really want to say nobody could be stupid enough to put weapons in the hands of an AI. The human race has proved me wrong several thousand times already in history. Yeesh, I personally think that that’s an incredibly stupid idea. But then the problem becomes what’s a weapon?

    Because a weapon these days can be something as subtle as enough ability to control trading, high frequency trading. Accidentally crash a stock market. It’s already happened. Accidentally, and again, I’m air quoting the accidentally here, crash your competitor’s stock, or another nation’s stock market. AI is already being used as a genuinely useful tool to participate in the economy, but the economy can be used as a weapon.

    Putting AI in control of the water infrastructure in arid countries. Optimization, it can do those jobs a lot better. It can see almost instantaneously when there’s a pressure drop. So there’s a leak in this section of the pipe. Somebody needs to go fix it. And also it can just shut off the water to an entire section of the city because, I don’t know, it feels like it. Because for some reason it is optimizing for a different goal than we actually think we gave it.

    The trick is we can say, we can input into ChatGPT, I want you to provide water to the entire city in a fair and equitable way. That doesn’t mean that’s what it’s going to do. We just think that that’s what it’s going to do. We hope.

    [00:42:26] Nathan Wrigley: I think we kind of come back to where we started. If we had a crystal ball, and we could stare two years, three years, 10 years into the future. That feels like it would be a really great thing to have at the moment. There are obviously going to be benefits. It’s certainly going to make work more productive. It’s going to make us able to produce more things. But as you’ve talked over the last 20 minutes or so, there are also points of concern and things to be ironed out in the near term.

    [00:42:52] James Dominy: Absolutely, yeah.

    [00:42:53] Nathan Wrigley: We’re fast running out of time, so I think we’ll wrap it up if that’s all right? A quick one James, if somebody is interested, you’ve planted the seed of interest about AI and they want to get in touch with you and natter about this some more, where would they do that?

    [00:43:06] James Dominy: The best way is probably email. I am not a social person in the social media sense. I don’t have Twitter. I don’t do any of that. So I’m probably terrible for this when I think about it. My email is, J for Juliet, G for golf, my surname D O M for mother, I, N for November, Y for yankee at gmail.com. Please don’t spam. Please don’t get AI to spam me.

    [00:43:30] Nathan Wrigley: Yeah, yeah. James Dominy, thank you so much for joining us today.

    [00:43:34] James Dominy: Thank you for the opportunity. It’s been great fun, and I’ve really enjoyed being able to kind of deep dive into a lot of the stuff I just had to gloss over in the workshop. Thank you.

    On the podcast today we have James Dominy.

    James is a computer scientist with a master’s degree in bioinformatics. He lives in Ireland, working at the WP Engine Limerick office.

    This is the second podcast recorded at WordCamp Europe 2023 in Athens. James gave a talk at the event about the influence of AI on the WordPress community, and how it’s going to disrupt so many of the roles which WordPressers currently occupy.

    We talk about the recent rise of ChatGPT and the fact that it’s made AI available to almost anyone. In less than twelve months many of us have gone from never touching AI technologies to using them on a daily basis to speed up some aspect of our work.

    The discussion moves on to the rate at which AI systems might evolve, and whether or not they’re truly intelligent, or just a suite of technologies which masquerade as intelligent. Are they merely good at predicting the next word or phrase in any given sentence? Is there a scenario in which we can expect our machines to stop simply regurgitating text and images based upon what they’ve consumed; a future in which they can set their own agendas and learn based upon their own goals?

    This gets into the subject of whether or not AI is in any meaningful way innately intelligent, or just good at making us think that it is, and whether or not the famous Turing test is a worthwhile measure of the abilities of an AI.

    James’ background in bioinformatics comes in handy as we turn our attention to whether or not there’s something unique about the brains we all possess, or if intelligence is merely a matter of the amount of compute power that an AI can consume. It’s more or less certain that, given time, machines will be more capable than they are now, so when, if ever, does the intelligence Rubicon get crossed?

    The current AI systems can be broadly classified as Large Language Models, or LLMs for short, and James explains what these are and how they work. How can they create a sentence word by word if they don’t have an understanding of where each sentence is going to end up? James explains that LLMs are a little more complex than just handling one word at a time, always moving backwards and forwards within their predictions to ensure that they’re creating content which makes sense, even if it’s not always factually accurate.
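    The “Markov Model” linked below is a useful mental model for this word-by-word process: the simplest next-word predictor just counts which words followed each word in its training text, then emits likely continuations. A toy sketch in Python; the corpus and the greedy pick-the-most-frequent strategy are purely illustrative, and real LLMs condition on far more context than the single preceding word:

```python
from collections import Counter, defaultdict

def train(text):
    """Count, for each word, which words follow it in the training text."""
    words = text.split()
    model = defaultdict(Counter)
    for current, nxt in zip(words, words[1:]):
        model[current][nxt] += 1
    return model

def generate(model, start, length):
    """Greedy generation: always emit the most frequent next word."""
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:        # dead end: this word never had a successor
            break
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

corpus = "the cat sat on the mat and the cat ran"
model = train(corpus)
print(generate(model, "the", 4))  # → "the cat sat on"
```

    A model like this is fluent locally but has no idea where the sentence is going, which is exactly the gap the transformer-based LLMs discussed above are designed to close.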

    We then move on from the conceptual understanding of AI to more concrete ways it can be implemented. What ways can WordPress users implement AI right now, and what innovations might we reasonably expect to be available in the future? Will we be able to get AI to make intelligent decisions about our website’s SEO or design, and therefore be able to focus our time on other, more pressing, matters?

    It’s a fascinating conversation whether or not you’ve used AI tools in the past.

    Useful links.

    ChatGPT

    Stable Diffusion

    Markov Model

  • WordPress 6.3 Will Introduce A Command Palette

    Last week Gutenberg contributors were engaged in a spirited debate regarding a proposal to rename the new Command Center to Wayfinder. The feature, designed to be an extensible quick search and command execution tool, will land in WordPress 6.3.

    The majority of participants in the discussion were strongly against calling it Wayfinder, as the term doesn’t translate well, nor does it make the feature’s benefits easy to understand. Wayfinder was proposed as a unique name that “has the potential to evoke a sense of curiosity, exploration, and discovery.” There were several attempts to wrap up the discussion with notes on alternatives even when it was apparent that the general consensus was unequivocally not in favor of the term Wayfinder.

    Automattic-sponsored Gutenberg contributor Anne McCarthy commented on the issue with the decision, which she said was reached after consulting project leadership and reading through the comments:

    Let’s move forward with Command Palette.

    Reasoning: easier to translate, consistent across other tooling outside of WordPress, matches current functionality, eases discoverability/understanding of value, and leans generic which matches the concerns raised here.

    Ultimately, we can always discuss renaming if the feature reaches a point of evolution outside of this initial name. As raised above, that would be more worth risking a unique name for than something that exists in other products and that ultimately we want people to quickly understand/find value in. Plus if we hold off on that name for the future, it can create a nice marketing push for something truly unique when/if the time comes. If folks have additional specific concerns around this naming, please speak up sooner rather than later.

    McCarthy also requested other contributors ensure the re-naming is updated throughout the interface for the upcoming release.

    This was an important decision that needed to be made ahead of WordPress 6.3 Beta 1, which was supposed to be released today but was delayed to Wednesday, June 28, due to an unrelated issue. The Command Palette will likely be introduced in blog posts, the 6.3 About page, and countless third-party resources so the proposal urgently needed a conclusion.

    It’s also to the team’s credit that they didn’t force a fancy marketing name and instead landed on the side of the majority of contributors who were in favor of using clear language. The API for the Command Palette is now public and ready for developers to create their own custom commands. Using a term that is easy to understand and translate will engender more global community buy-in, as 52% of WordPress users run the software in a language other than English.

  • WordCamp Europe 2024 Calls for Organizers

    WordCamp Europe 2023 in Athens attracted more than 2,500 attendees from 94 countries, made possible by 112 organizers and 250 volunteers. The event is now looking forward to 2024, which will be hosted by the Italian WordPress community in Torino, Italy, June 13-15. This modern city is located at the foot of the Alps in northwestern Italy and has more than 2,000 years of history to explore.

    WCEU 2024 is calling for organizers who will serve on one of a dozen teams that have been operating for the past few years, including attendee services, budget, design, sales and sponsors, communications, and more.

    Those selected to organize will begin planning WCEU in September 2023 and will work with a distributed team on a weekly basis until June 2024.

    During the 2023 event’s speaker announcements, the WCEU organizing team was criticized for the second year in a row regarding its commitment to diversity. The previous year organizers were called out for the lack of diversity on the organizing team and this year the complaint was a lack of diversity in the speaker selection.

    WCEU 2023 organizers published a transparent account of the various selection processes used for organizers, speakers, media partners, and others involved in the event. The article states that organizers are shortlisted based on their skills, with an effort “to keep gender parity high whilst also selecting people from all available European WordPress communities.” It also states that applicants’ experience and enthusiasm are chief among selection factors but organizers also reach out to encourage underrepresented groups to apply:

    During the selection process we don’t have anything that resembles a “positive discrimination” policy, whereby we choose people based on their race, color, background, gender, sexual identity, or any other attribute; we solely chose people based on their stated experience and enthusiasm to be part of the team…

    Acknowledging that diversity within the Organizing team is important, we reach out to community groups and members before and during the application process, encouraging people to apply where we have historically seen underrepresentation.

    The article concludes with a statement of willingness to modify this selection process if the organization is not able to achieve a diverse lineup:

    WordCamp Europe is an iterative event; each year learns from the last and 2024 will be no different. We cannot take for granted that achieving diversity one year guarantees it the next. As a flagship WordCamp event we may need to positively discriminate to achieve gender parity, or fair representation of communities. 

    The call for 2024 organizers does not identify any changes that have been made to the selection process. Prospective organizers will need to fill out the application form highlighting their skills, experience, and desired role.

  • Reusable Blocks Renamed to Patterns with Synced and Non-Synced Options

    There has always been some confusion and overlap between reusable blocks and patterns. The difference was that reusable blocks could be created and edited in the block editor and then reused in other places – inserted into posts or pages – with edits propagating to every instance. Block patterns, once inserted, can be edited independently and are not synced. They give users the ability to apply the same layout to different posts and pages.

    Reusable blocks have now been renamed to patterns, with the option to be synced, which offers the same functionality as the former reusable blocks, where all instances can be updated at once. Non-synced patterns are just regular patterns – those that can be edited independently of other instances that have been inserted. These updates are coming in Gutenberg 16.1 and will be included in the upcoming WordPress 6.3 release.

    WordPress contributor Aki Hamano posted a diagram to Twitter regarding the renaming, which was confirmed as an accurate representation of the changes.

    “Clients already find the pattern and reusable block concept very difficult to grasp,” WordPress developer Mark Howells-Mead commented on the pull request for the renaming. “This change will make things much harder for regular users to comprehend.”

    Gutenberg contributor Paal Joachim Romdahl commented that it would be helpful to have more time to test this in a few versions of the Gutenberg plugin, as WordPress 6.3 beta 1 is expected this week. Learning materials and documentation will need to be updated with very little notice.

    Gutenberg contributor Daniel Richards encouraged contributors to see the change as part of “the great unification,” an effort towards consolidating the many different block types into a single concept and streamlining the content and site editors.

    “In the future it might also be possible for template parts to be considered ‘synced patterns’, and at that point things become much more streamlined and there are far fewer concepts for users to grasp,” Richards said.

    “So the hope is that this is a first step on the path to making things easier for users, rather than more difficult. But I do realize that for existing users it’s quite a shift.”

    As part of this effort, WordPress 6.3 will also introduce pattern creation in the block editor using the same interface that it previously used for reusable blocks. Pattern creation necessitates having a place for users to view and manage their patterns. WordPress 6.3 will also include a first pass at a Pattern Library inside the Site Editor, which will include both patterns and template parts. Gutenberg designers shared a preview of what this would look like a couple weeks ago:

    image credit: WordPress Design Share June 5-16

    The Potential of Partially Synced Patterns

    In May, contributors began a discussion about the concept of partially synced patterns, which Daniel Richards summarized:

    Today, when you insert a pattern, the blocks from that pattern are completely decoupled and standalone. There’s no way to tell that those blocks originated from a pattern, especially since they can be edited to no longer resemble the source pattern.

    Partially synced mode is different. When a pattern that’s partially synced is inserted, it retains a reference to the source pattern. The blocks within the pattern are locked so that they cannot be removed or reordered and new blocks cannot be inserted (this is called contentOnly locking). Only specific parts of the pattern considered ‘content’ can be edited (denoted by adding __experimentalRole: 'content' to a block’s definition).

    When the source pattern is updated, all instances of blocks that reference the source pattern are updated too (much like a reusable block), but the content values the user entered are retained. The best way to think of this is that the user can update the design of a pattern, but doesn’t lose content that exists in templates and posts.

    This concept will not make it into the upcoming version of WordPress, as contributors are still discussing one of many complex implementations, but it offers a glimpse of the more granular control that may come to patterns in the future. Partially synced patterns would bring distinct benefits to many CMS and content design use cases where clients may be editing content.

    “I am a site developer for an agency, and am actively making sites for clients using Gutenberg every day,” Eric Michel said. “Probably our biggest pain point right now is that the editor does not handle types of content that are mostly standardized with small content customizations per post – things like contact directories, majors at a university, products in a catalog.

    “For us, the absolute dream scenario is what you are proposing, except with the inclusion of the ability to alter the primary template and have all of the pages that use that template automatically change as well.”

    The discussion on making partially synced patterns possible continues in search of an implementation that will ensure users don’t modify the patterns in ways that destroy the ability to display the retained content. WordPress 6.3 will ship with synced and non-synced pattern options, and partially synced patterns may land further down the road in a future release.

  • Really Simple SSL Plugin Adds Free Vulnerability Detection

    Really Simple SSL, a popular plugin used on more than five million sites for installing SSL certificates, handling website migrations, mixed content, redirects, and security headers, has added a new feature in its most recent major update.

    Version 7.0.0 introduces vulnerability detection as part of a partnership with WP Vulnerability, an open source, free API created by Javier Casares with contributions from other open source, freely available databases. Once enabled, it notifies users if a vulnerability is found and suggests actions.

    “Really Simple SSL mirrors the free database with its own instance to secure stability and deliverability, but of course provides the origin database with an API to enrich, or improve its current data,” Really Simple Plugins developer Aert Hulsebos said.

    The new vulnerability detection feature is not enabled by default, so users will need to enable it in the settings. A modal will pop up where users can configure their notifications and run the first scan.

    When emailed about a vulnerability, users can manually respond with an action or set the plugin to force an update automatically (when available) after 24 hours of no response. Other automated actions can be configured in the Measures section of the settings.

    For the past several years, Really Simple SSL has provided SSL certificate configuration and installation via Let’s Encrypt as a first pass at securing WordPress sites. To finance this for free users, the plugin also offers a Pro version that handles Security Headers, such as Content Security Policies, which are highly complex for most users and not easily configured.

    “We figured that with our reach we could impact security on the web as a whole, by adding features in order of impact on security,” Hulsebos said. “So vulnerabilities, after hardening features specific to WordPress, was next. 

    “The nature of our partnership with Javier and WP Vulnerability is sponsoring the efforts of WP Vulnerability and appointing a security consultant ourselves to this open-source effort to improve, and moderate the open-source database daily. WP Vulnerability does not compensate us, nor does it have a stake in Really Simple SSL. Vulnerability detection is available for everyone and always will be.”

    Because Really Simple SSL started as a lightweight SSL plugin, Hulsebos said they have taken a modular approach to minimize impact on users who only want or need certain features. Following the launch of the new vulnerability detection feature, the plugin’s authors plan to add login security with 2FA to better secure authentication on WordPress sites.

  • WordPress Pattern Directory Updated to Show Curated Patterns by Default

    If you haven’t visited the WordPress Pattern Directory lately, it may look very different from when it launched two years ago. The initial emphasis was on getting the community to contribute to the resource; the directory has since surpassed 1,500 patterns.

    Contributors are making changes to provide a more curated experience ahead of a new Pattern Directory Explorer that is still in progress. A recent update to the Pattern Directory changes the homepage and category pages to show curated patterns by default, which has confused some returning visitors.

    The curated patterns are those by WordPress.org – the core bundled patterns. Community-contributed patterns are available as a filter in the dropdown of the directory’s menu.

    There are only 46 core patterns, so some category pages tend to look a little sparse and far less colorful than when community patterns are selected. At the moment, having curated patterns display by default does not offer the best experience for users coming to browse, as Pootlepress founder Jamie Marsland pointed out on Twitter. Automattic-sponsored contributor Rich Tabor responded that there is still more work to be done on providing a better curated experience in the Pattern Directory.

    “This change also prepares to support the Pattern Explorer in the editor,” Automattic-sponsored contributor Kelly Choyce-Dwan said in the announcement. “It’s still in progress, but it will be possible to search through community-submitted patterns directly from the editor.”

    Choyce-Dwan referenced an effort that is currently underway to bring a new flyout to the patterns tab of the inserter inside WordPress, making the modal a place where users can more easily explore and access patterns from the directory.

    There are also related discussions on how themes could create pattern bundles, enabling the possibility of users filtering by theme. In this discussion, Automattic-sponsored contributor Anne McCarthy suggested these pattern bundles could be automatically submitted to the directory upon the theme’s approval, which would make it effortless for theme authors to contribute them.

    Updates to the Pattern Directory’s filtering are part of the redesign work on WordPress.org and more discussions are happening on the Pattern Directory GitHub repository.

  • WordPress.com Makes Monetization Features Available for Free

    WordPress.com has been known to experiment with its pricing from time to time, and the platform announced another major change today. Users on the Free plan are now able to use monetization features without upgrading.

    In the past, WordPress.com users who wanted to earn money on their websites by collecting donations, creating a newsletter, or selling items or subscriptions, had to be on one of the paid plans. These monetization features are now available to all users on all tiers.

    The fee structure varies based on the user’s plan. Transaction fees are highest for Free users at 10%, but the free tier gives creators a chance to see if they can make money without any upfront cost. Commerce plan users ($70/month, or $45/month billed annually) pay no transaction fees. Stripe also collects 2.9% + US$0.30 for each payment made to a US Stripe account.

    Plan                     Payment fee
    WordPress.com Commerce   0%
    WordPress.com Business   2%
    WordPress.com Premium    4%
    WordPress.com Personal   8%
    WordPress.com Free       10%
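To make the fee schedule concrete, here is the arithmetic for a single $100 payment, under the assumption that both the WordPress.com fee and Stripe's fee are computed on the gross amount (the order in which fees are applied is not specified):

```python
def net_payout(gross: float, plan_fee_rate: float) -> float:
    """Net amount a creator keeps from one payment, assuming both fees
    are computed on the gross amount (an assumption for illustration)."""
    platform_fee = gross * plan_fee_rate       # WordPress.com's plan fee
    stripe_fee = gross * 0.029 + 0.30          # Stripe: 2.9% + US$0.30
    return round(gross - platform_fee - stripe_fee, 2)

print(net_payout(100.00, 0.10))  # Free plan: 86.8
print(net_payout(100.00, 0.00))  # Commerce plan: 96.8
```

So even on the Free plan, a creator keeps roughly 87% of each payment, which is the "no upfront cost" trade-off the new pricing offers.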

    Self-hosted WordPress users already have many free plugin options to monetize their sites, but with that comes the requirement of knowing how to maintain and update their own sites. WordPress.com’s offering is targeted at creators who just want to get started making money online. The company is inching closer to being a one-stop shop for websites, especially as it makes a play for former Google Domains customers who are looking for somewhere to host domains after theirs were sold to Squarespace.

    It’s important to note that creating a full-featured online store is still restricted to Business and Commerce plans. Using Pay with PayPal to accept credit and debit card payments via PayPal is also only available via an upgraded plan.

    WordPress.com’s pricing page has not yet been updated to reflect monetization features as being free – i.e. the Personal plan still lists paid subscribers and premium content gating as an upgrade. It’s possible the team hasn’t edited that page yet or this may be another pricing experiment.

    Expanding the availability of monetization features is likely to be received as a positive change, since users are not losing any features that were previously free. Instead, they have the opportunity to see if they can monetize and then adjust their plans based on their comfort level with the transaction fees deducted.

  • Gravatar Adds New Payment Features for Profiles

    Up until yesterday, the Gravatar (Globally Recognized Avatar) blog lay dormant for nine years, the last post chronicling how the team set out to create a Gravatar app that somehow “morphed into a Selfies app.” Communication went silent after that, although the Twitter account posted occasionally.

    The service has pivoted to become “a personal digital business card” where users can link to various apps and websites that help to establish their identities online.

    Gravatar announced this week that it has launched new payment features for profiles. Users have the option to add links for PayPal, Venmo, and Patreon. The Gravatar team is looking at adding Cash App and more providers in the future.

    On mobile, profiles appear with new “Send Money” and “Share Profile” buttons. Each profile has its own unique QR code that can be copied and shared.

    The payment accounts show up as links that visitors can click through. Users can also display links to cryptocurrency wallet addresses, including Bitcoin (BTC), Litecoin (LTC), Dogecoin (DOGE), Ethereum (ETH), XRP, and Cardano (ADA).

    Profiles can be customized with a background image, photo gallery, social links, and links to verified services.

    Gravatar is used by Slack, Atlassian (owner of Jira and Trello), GitHub, Stack Overflow, and Disqus, serving millions of requests per day. Another new major user is OpenAI, which displays users’ Gravatar images when chatting with ChatGPT. The service is also integrated with every WordPress install, and an Automattic representative confirmed there are no plans to change this.
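These integrations rely on Gravatar's email-hash URL scheme: the embedding site hashes the user's address locally and puts the digest in an image URL, so the email itself is never sent to the page. A minimal sketch (Gravatar has historically used MD5 for this hash, with SHA-256 also supported more recently; the `s` and `d` query parameters control size and fallback image):

```python
import hashlib

def gravatar_url(email: str, size: int = 200) -> str:
    """Build a Gravatar avatar URL from an email address.

    The address is trimmed and lowercased before hashing so that
    '  Reader@Example.com ' and 'reader@example.com' resolve to the
    same avatar.
    """
    digest = hashlib.md5(email.strip().lower().encode("utf-8")).hexdigest()
    return f"https://www.gravatar.com/avatar/{digest}?s={size}&d=identicon"

print(gravatar_url("  Reader@Example.com  "))
```

This normalization step is why the same avatar follows a user across Slack, GitHub, Stack Overflow, and WordPress comments without any shared account.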

    Automattic reported that the company does not receive a cut of any payments passed through Gravatar links, nor does it have financial partnerships with any of the payment providers. The company also has no visibility into the transactions that happen through Gravatar payment links.

    During the past nine years, the small Gravatar team has been improving how profile pages look, adding services that can be verified, working to improve the hashing and security of data, and maintaining the infrastructure required to store and serve so many images and profiles.

    “We aren’t currently working on a Gravatar app, but it is something we are considering,” an Automattic representative told the Tavern.

    After the Selfies app was retired, some of its code went into Jetpack and is now part of the Jetpack mobile app, where users can manage their Gravatar profile information and avatar photo.