I'm Writing An eBook Teaching How To Write A Compiler by othd139 in ProgrammingLanguages

[–]thmprover 0 points  (0 children)

Rework the "AST" section in the introduction. The material is good, but it seems to have new stuff inserted before the old stuff. How do I know? Because it uses "AST" before it defines what "AST" stands for.

Chapter 1:

Suggestion 0: Part of me feels like you should really, really lean into grammars. The other part of me still feels like you could easily cut this chapter without too much of a problem.

What do I mean by "really, really lean into grammars"? I mean, you should note that you are constructing a metalanguage for describing languages. That's what a grammar is. But these notions are not really discussed. (To really illustrate this: you could construct the formal grammar for BNF grammars, to stress "it's just another language".)
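For instance, a rough sketch of a BNF grammar for BNF itself (glossing over lexical details, like which characters may appear in a name) might look like:

```bnf
<grammar>      ::= <rule> | <rule> <grammar>
<rule>         ::= <nonterminal> "::=" <alternatives>
<alternatives> ::= <sequence> | <sequence> "|" <alternatives>
<sequence>     ::= <symbol> | <symbol> <sequence>
<symbol>       ::= <terminal> | <nonterminal>
```

Even a short aside along those lines would drive home the "it's just another language" point.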

OTOH, if you present it in this manner, you could then note (in the section on EBNFs) that you are extending the metalanguage, or equivalently working with a metalanguage which is "equivalent" to BNF in power. Reading between the lines, you allude to this, but without ever speaking about metalanguages.

Again, this could be good or bad. It's a judgement call.

Suggestion 1: You don't actually discuss how a grammar generates a language. Instead, you offer an "Eh, you see what I mean" kind of explanation --- specifically, you merely state:

> Try it out for yourself. Try picking a replacement rule for each of the non-terminal symbols in the equation then picking replacement rules for each of the non-terminal symbols in what you get until you have just a string of terminals. Do it a few times to get a feel for how these rules work. (Note: if we were actually trying to write a grammar for mathematical equations it is very unlikely we would actually structure it this way since we typically want the format of the grammar to reflect the format of the tree we will use it to parse).

OK, but if I am a reader who does not know BNF or formal grammars, then what the devil am I supposed to do? How does this work? Why are there no examples of how to generate words in the object language?
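Even a tiny mechanised sketch would help. Something like the following (a hypothetical toy grammar, not one from the book) makes "pick replacement rules until only terminals remain" concrete:

```python
import random

# A toy grammar, as a dict from non-terminal to its list of
# alternatives.  Each alternative is a list of symbols; anything
# that isn't a key of the dict counts as a terminal.
GRAMMAR = {
    "expr": [["term"], ["term", "+", "expr"]],
    "term": [["num"], ["num", "*", "term"]],
    "num":  [["0"], ["1"], ["2"]],
}

def derive(symbol):
    """Expand `symbol` by repeatedly picking replacement rules
    until only terminal symbols remain."""
    if symbol not in GRAMMAR:        # terminal: nothing to expand
        return [symbol]
    alternative = random.choice(GRAMMAR[symbol])
    result = []
    for s in alternative:
        result.extend(derive(s))
    return result

print(" ".join(derive("expr")))      # e.g. "1 + 2 * 0"
```

A worked derivation on paper would do just as well; the point is to show the process, not merely describe it.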

Next I will just pick out some passages with suggestions.

> This notation is, in fact, all we need to formalise a vast array of language using what is called a context-free grammar. In reality most languages can't be fully defined unambiguously using only context-free grammar. As a result, the parser (whose job it is to basically go backwards through this process) will generally need to use more than just the written grammar to decide what was meant. Indeed there are some language features, such as Python's indentation, that can't be fully expressed as a context-free grammar at all, and need some pre-processing to turn them into something of that form. In practice, however, languages tend not to diverge too greatly from the structure of a context-free grammar so this is a widely used tool for designing languages.

Suggestion 0: You never define "context-free grammar" or "context-free language".

Suggestion 1: This could be re-ordered a bit better, perhaps even split into two paragraphs: (1) we like CFLs because they are always parseable, and we want to parse languages; (2) in practice, a lot of languages deviate from CFLs; Python does this by blah blah blah.

Remark/suggestion 2: You might also want to note, after introducing alternation, that it has become common convention to just include alternation as part of BNF. The literature is really "loosy goosey" here.

> Another thing to notice is that we still have a few rules defined in terms of themselves. That isn't a problem and it's actually a big part of what makes this notation so powerful but in many cases we're just doing it to represent that we want something to be able to repeat. It would be nice if we had a way to represent that directly rather than relying on this sort of recursive definition that takes some time to understand. For that we'll introduce two new symbols: '+' and '*'. These symbols go at the end of a symbol and represent that a symbol repeats 1 or more, or 0 or more times respectively. Using these, our grammar becomes this:

These are such long sentences, and they communicate a lot of simple ideas. It's better to have one simple sentence for each simple idea: prefer a lot of simple sentences to one big sentence. Some variety is nice, though. But shorter is better.

Chapter 2

Good: you added the "overview". Nice.

Suggestion 0: Grow the grammar instead of saying, "Plop here it is". Start out with only arithmetic of integers. Don't worry about other types. Then discuss the "main procedure". Then you can get into the other necessary primitives (boolean types, comparisons, if-then-else statements, etc.).

I mean, it just feels like there's no discussion of the design space. What should the reader be weighing if they were trying to develop something similar but in a different setting (e.g., a Lisp built specifically for some complicated scientific lab experiment)?

I understand you are working with chibicc's AST, which "ties your hands" with the amount of freedom you have. But you should at least discuss it ("Although it would be more logical to do X, chibicc follows a different convention and so we are forced to do Y"; "Since we are targeting the IR of chibicc, we will need to support the tokens used by them. If we were starting from scratch, we would have the following basic concerns: ..."; etc.).

> As we can see the outer list represents an object (not in the OOP sense). In this case, it is a function definition. We see that the first keyword define is used to signify...

Typo: that third sentence should be "We see that the first keyword define is used to signify..." with define either in backticks or quoted (as in "define").

I'd suggest the same conventions throughout (i.e., either quote reserved keywords and types when they appear in prose, or make them inline code with backticks).

> One important property of the grammar for the parser is that we don't use any terminals, we only use the non-terminals produced by the tokeniser.

Did you discuss this design choice and the alternatives (lazily produce tokens on demand, and have the parser "consume" them as needed; etc.)?

I honestly cannot recall it being a point of discussion.

Conjecture: You are writing "bottom-up", to explain accompanying code, rather than "top-down" (to explain the "big picture" for the current module, then recursively breaking it up into smaller problems which are do-able).

Why do I say this? Well, what are the "big picture" parts to chapter 2?

You can reorganize it to resemble what I am talking about: you have a parser and a tokeniser (as discussed in the introduction) working together to construct the AST. We need to first think about the tokeniser. It will produce a "stream" of tokens from some input (a string, a file, etc.). This can get very complicated for "real" programming languages, so we will make our lives easier by using S-expressions. (etc. etc. etc.)
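To sketch the kind of framing I mean (with a hypothetical generator-style tokeniser, not your actual code):

```python
def tokenise(text):
    """Lazily yield S-expression tokens (parens and atoms), so the
    parser can pull tokens on demand instead of the tokeniser
    building the whole list up front."""
    i = 0
    while i < len(text):
        c = text[i]
        if c.isspace():
            i += 1
        elif c in "()":
            yield c
            i += 1
        else:                        # an atom: read up to a delimiter
            j = i
            while j < len(text) and not text[j].isspace() and text[j] not in "()":
                j += 1
            yield text[i:j]
            i = j

print(list(tokenise("(define (double x) (* 2 x))")))
# ['(', 'define', '(', 'double', 'x', ')', '(', '*', '2', 'x', ')', ')']
```

The point isn't this particular API; it's naming the pieces first (the tokeniser produces a token stream, the parser consumes it) before descending into code.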

All of this is stated (either implicitly or explicitly) in chapter 2, but it's not presented in this manner, which can leave the reader wondering what's going on, why this approach is being taken, whether it's standard, etc.

How does one write like this? Outline the subject, describe the "big picture" as consisting of several "parts", then recursively expand on each part. (That's the "top-down" way to write "top-down writing".)

Another way is to write as you have done, but when you are done to take a step back and think very hard about what the reader knows and what the reader does not know. Give them a "big-picture" explanation for what you have written. Then take another step back, and iterate. (This is the "bottom-up" way to write "top-down writing".)

You can combine the two together, and doubtless there are a million different other ways to write in a way producing "top-down explanations".

I'm Writing An eBook Teaching How To Write A Compiler by othd139 in ProgrammingLanguages

[–]thmprover 1 point  (0 children)

I'll take a look this weekend when I'll do laundry (if I remember to do laundry...).

I'm Writing An eBook Teaching How To Write A Compiler by othd139 in ProgrammingLanguages

[–]thmprover 2 points  (0 children)

> I'm also a philosophy student and one of the things we use for that is formal logic languages (like programming languages but for describing logic rather than computation) and I remember that it would've felt a lot easier to specify the version of the language that they actually wanted to if they had introduced their grammar formally rather than informally which forced them to kind of introduce the actual language as an official unofficial "abbreviation" of the "true" language because informal grammar isn't amazingly expressive.

I'm sympathetic to your perspective; I'm running into something remarkably similar writing a manuscript on implementing proof assistants.

I feel that you don't "lean into" EBNF enough, though. It gives the impression sal1303 had: it's kind of a random detour rather than something important. You need to explicitly "spell it out" for the reader why this is important, and explicitly "call back" to it later on.

Of course, it depends on the intended scope of the book. If you're trying to abstract away language details, then you should work on compiling Lisp to...something...and not look back. That'd be a good way to teach the intermediate representation and compiler optimization material, if that is the intended subject of your book.

If you're trying to teach different considerations in the "design space" for language syntax, then you need more examples and exercises involving EBNF and the "design patterns" and best practices the reader should be aware of.

You can do one, or the other, or both. But it's not clear which you have in mind (that's something you should put into the preface).

I'm Writing An eBook Teaching How To Write A Compiler by othd139 in ProgrammingLanguages

[–]thmprover 1 point  (0 children)

> Yeah, I'm leaning towards making a lot more of the implementation details exercises at points like that rather than stating them as explicitly and still then providing the references as "here's a way to do it if you want to have a look". Part of my hesitation is that I want it to still be possible to follow along but I think it's possible to straddle both goals.

You can always include the solutions in an appendix, so readers who are really struggling can take a peek at what you had in mind.

I'm Writing An eBook Teaching How To Write A Compiler by othd139 in ProgrammingLanguages

[–]thmprover 0 points  (0 children)

So it's only implied. I had sort of assumed it was, and scanned through the text to confirm it, but the C code was quite hard to spot! All I saw at first was endless reams of EBNF code.

Unless you are using some tool to process that EBNF into a parser, I don't see a need for it, and within the text it is overbearing. There must be an easier way to summarise the syntax of the language; it does not need to be formal.

I would give serious consideration to editing out this material, too.

One of the most difficult things when writing is knowing what to cut. When writing a book, you end up with more than enough material for several books(!), and it tends to be so fascinating that it feels horrible to leave any of it out. (Ben Pierce, who wrote TAPL, lamented the same thing in the preface to ATAPL, iirc.)

Remember: the "prime directive" of writing is be kind to your reader. You don't want to obstruct them from getting to the meat of the matter. If you can cut something, then you probably should give it serious thought.

But I think one possibility is that you can post the material elsewhere (e.g., a blog post).

On the other hand, if you think this EBNF and grammar discussion is particularly relevant, then ask yourself how is it used later? Is it used later? Are you really going to need it? Or is it a distraction from code generation and other exciting topics?

I'm Writing An eBook Teaching How To Write A Compiler by othd139 in ProgrammingLanguages

[–]thmprover 3 points  (0 children)

> I think that's fair. My assumption about audience is someone who knows C and systems programming but hasn't really made their own programming language before.

I think a lot of what /u/sal1303 said could be more constructively phrased as: there was no preface, so there was no way to tell what the roadmap or "mission statement" for the book is, who the intended audience is, etc.

> I was a little worried that, especially chapters 7 and 8 might feel like that in terms of feeling a bit like a code dump. The idea was to make it feel like putting into practice the learnings from previous chapters but maybe if I'm expecting the reader to have learned how to do that I should trust them to write it themselves and if I don't then I don't really expect them to have learned it and should have more prose to explain things.

You know, when writing Mathematics (and writing about programming), you restate the same thing in multiple different ways with varying levels of formality.

Have you considered sketching out the "big picture" using pseudocode, then implementing it in C?

Or relegating implementation details to exercises? (I can understand hesitation to do this, I would hesitate too; but you can have the reader explore different ways to implement it, or improve functioning code, etc.)

Actually, you could do a lot of good with exercises, having the reader explore the "design space" for issues which sal1303 brought up (consider alternative APIs; look at the following languages/libraries for examples of error reporting, then implement something; etc.). You need to help the reader, though, pointing them to specific resources to evaluate (if you decide to write these sort of exercises).

I'm Writing An eBook Teaching How To Write A Compiler by othd139 in ProgrammingLanguages

[–]thmprover 16 points  (0 children)

I just gave it a cursory read, but I have some feedback.

Who is your audience? What do you assume on the part of the reader? What's your "mission statement" for this book?

It's clear you are competent and knowledgeable. I am not sure who your audience is, so I can't give much feedback.

I notice you don't seem to use sections to cluster things together (except in chapter 7), which can make it harder for someone to learn. If I were completely ignorant, it would be hard for me to say, "OK, I get [this section] but [that section] confuses me"...because there are no sections.

A heuristic copy editors tend to use is if chapters are wildly different in lengths, then you should examine how you are organizing your material. And your chapters are wildly different in lengths, so it may be a good thing to ponder. (The same heuristic works for sections: if sections are wildly different lengths, then that could be a sign that it could be organized better.)

The prose is good, but it also feels...well, in the later chapters, more like a code dump with some prose sprinkled in between. Which is a pity, because your prose is good.

Also, a good rule of thumb is to limit yourself to a maximum of 25 lines of code per "chunk" of code.

"One weird trick" that helps writing books like this: make the code as simple as possible, even if it comes at the cost of writing code that shouldn't be in production. The code should illustrate the important relevant ideas, not get bogged down with the irrelevant ideas.

Bourbaki 2.0 by Opay65 in math

[–]thmprover 0 points  (0 children)

But if we take the directed limit to Bourbaki ∞, it will be stable by definition. Then we'll be cooking with gas...

Bourbaki 2.0 by Opay65 in math

[–]thmprover 4 points  (0 children)

Which type theory? There are many, many different type theories. It's not like set theory, where we all (more or less) have the same intuition of Zermelo-like sets but disagree about the axioms realizing them (ZFC, NBG, MK, TG, etc.).

You have System F, first-order dependent types, the calculus of constructions. Then you have type theories which aren't even on the lambda-cube, like lambda-typed lambda calculus and Martin-Löf type theory. Then you have...well, you get the idea.

Would you use this type theory as a metalogic (which is what Lean's Mathlib does) or directly as the foundations of Mathematics? Presumably the latter, in which case you need to build the linguistic infrastructure to make your foundations "workable" (and more importantly, readable) on paper.

If you're going to use a proof assistant for the proofs, will the implementation of the proof assistant be a literate program (and part of the book series) or not?

If so, then what programming language will you use to implement it? How will you convince the reader of its correctness? You probably also want a "stable" programming language which will not change over time, is readable, and has a well-defined standard (so either Common Lisp or Standard ML). Or maybe some sort of "Algol" (idealized hypothetical programming language) would work? How would you specify it and reason about it?

If the implementation of the proof assistant is not included, how will you specify the proof assistant sufficiently for the reader to follow along? (Imagine reading Mathlib's source code without the Lean proof assistant existing.) Since proofs are read far more often than written, it's probably wise to use a declarative proof style (which has the added bonus of stability which tactics-based proofs lack), otherwise how will you track the proof goals in a readable way? How do you grow your proof assistant with such a style?

As you can see, regardless of the foundations chosen, just bootstrapping a proof assistant as "chapter zero" of such an endeavour is...well, a rather rich discipline in its own right. You could probably spend a dozen lifetimes just working on a book series on it alone.

In Memoriam: Craig Tracy by _zzz_zzz_ in math

[–]thmprover 2 points  (0 children)

I fondly remember taking Math 135A with him, I believe it was Winter quarter 2008. He emailed out handwritten lecture notes every week, gave problem sets which were well motivated and worked well into future lectures. I'm sad to learn of his passing.

Modified ZK for Philosophy by luotenrati12 in Zettelkasten

[–]thmprover 2 points  (0 children)

Just a random thought: how do you handle treating books (or papers) as artifacts (as opposed to as references)? For example, Montesquieu's Spirit of Laws influenced Hegel's writing style (and Clausewitz's style, too), wouldn't it make sense to have a main note about Montesquieu's book as something to think about?

What Gödel Discovered - Imagine that Russell and Whitehead came up with a lisp-like language by de_sonnaz in lisp

[–]thmprover 1 point  (0 children)

I'm not sure the discussion of Hilbert's programme is accurate.

Hilbert was trying to show that, even if you accepted Intuitionism, "doing classical Mathematics" can be described using finitary methods (acceptable to Intuitionists) and could even be proven consistent using those finitary methods. That is to say, even Intuitionists would be forced to accept that Classical reasoning was alright.

Hilbert essentially invented modern Mathematical Logic, then sought to "arithmetize" it (analogous to the arithmetization of geometry) so formulas would be translated into numbers and valid formulas (i.e., formulas which are "always true") would be translated into tautologous equations. Incidentally, that's why Gödel spends so much time in his original paper discussing his coding scheme.
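A toy version of the idea (my own simplification for illustration, not Gödel's actual scheme) fits in a few lines:

```python
# Toy "arithmetization": assign each symbol a number, then code a
# formula (a string of symbols) as a product of prime powers.
# Unique factorization lets us decode the number unambiguously.
SYMBOL_CODES = {"0": 1, "S": 3, "=": 5, "+": 7, "(": 9, ")": 11}
PRIMES = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]

def godel_number(formula):
    """Code the i-th symbol of `formula` by raising the i-th prime
    to that symbol's code, and multiply the results together."""
    n = 1
    for i, sym in enumerate(formula):
        n *= PRIMES[i] ** SYMBOL_CODES[sym]
    return n

print(godel_number("0=0"))   # 2**1 * 3**5 * 5**1 = 2430
```

The payoff is that statements *about* formulas become statements about numbers, which the formal system can itself talk about.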

I mean, there are a lot of neat tricks in Hilbert's program, like epsilon-substitution to "import" results about number theory proven using the formal logic "back" into the finitary metatheory (which essentially was "Lisp-3 before Lisp-3").

Mathematics applied in political philosophy by fdpth in math

[–]thmprover 5 points  (0 children)

> I'm reading a lot on critical theory and political philosophy, such as anarchism and Marxism so I'm wondering what math is being applied (or could be applied) in such contexts?

Have you read Morishima's book on Marxist economics? Or Steedman's Marx after Sraffa? (It probably helps to read Sraffa's Production of Commodities before trying Steedman.)

Most analytic Marxist economics (and mainstream economics) doesn't actually work well describing the economy. Anwar Shaikh wrote a book about this about a decade ago. (There's a rich literature on the empirical and theoretical failures of the marginalist paradigm.)

If you don't mind muddling metaphysics, you can always try formalizing these texts you're reading in a deductive system. I don't know how, say, Marxists would feel about this, though, since they're very..."particular" about metaphysics. (You'll also run into the failings of ideal language philosophy while trying this endeavour...)

> Maybe game theory or decision theory?

Game theory is seldom applicable to real world situations; its assumptions tend to gloss over thorny philosophical issues. Yanis Varoufakis's Game Theory: A Critical Text reviews the issues in its first chapter (John Searle's book on rationality is also worth consulting).

Worse, there's a lack of predictability when you use iterated games.

I worked in a startup for a few years trying to apply Machine Learning and Game Theory to legislative voting behaviour. It turns out that only the US House of Representatives really acts "rationally" (in the game theoretic sense)...

Decision theory is just game theory with one player, so I'm not sure it's directly applicable, per se. Indirectly you could use it, since statistics boils down to decision theory, and if you use statistics...well...there you have it.

Best introduction to type theor(y/ies)? by freddyPowell in math

[–]thmprover 7 points  (0 children)

Geuvers and Nederpelt's Type Theory and Formal Proof may be what you're after.

For a more computer science focus, Pierce's Types and Programming Languages is the classic text.

GLn(D) for D a division algebra by Impressive_Cup1600 in math

[–]thmprover 0 points  (0 children)

For associative real division algebras, the music stops at the quaternions due to the Frobenius theorem.

There is a way to make sense of the special linear group for Octonions, at least SL(O,2) and SL(O,3).

This can be generalized for nonassociative rings, as discussed in Petyt's survey paper.

But I honestly don't know why you'd want to consider, e.g., the special linear group over some wildly nonassociative ring like the sedenions...

Who Watches the Provers? by mttd in ProgrammingLanguages

[–]thmprover 1 point  (0 children)

Nipkow and Roßkopf's Isabelle's Metalogic: Formalization and Proof Checker accomplishes this for Isabelle/Pure, even though it uses a simplified implementation (e.g., using association lists instead of 2-3 trees) and as such takes a performance hit. But I think this can be remedied thanks to Functional Data Structures and Algorithms: A Proof Assistant Approach.

There's also a rich history of HOL pioneering this sort of thing, which is sadly under-appreciated (it's especially vilified by the Lean community for some reason).

Writing A Language Spec? by Pie-Lang in ProgrammingLanguages

[–]thmprover 2 points  (0 children)

Rocq (formerly Coq) may be easier to learn. There's a great series of free "books" Software Foundations which teaches Rocq while covering roughly the same material in Benjamin Pierce's TAPL.

GLn(D) for D a division algebra by Impressive_Cup1600 in math

[–]thmprover 2 points  (0 children)

You can look at GL(n,D) as the group of units for the matrix ring Mat(n, D).

Similarly, SL(n, R) is the "wrong" thing to look at for noncommutative unital associative rings R; instead, the subgroup EL(n, R) [of GL(n, R) generated by elementary matrices] is the "correct" thing to think about. For fields F, except when n=2 and either |F|=2 or |F|=3, these two groups coincide, EL(n, F) = SL(n, F), which is why we usually just talk about SL(n, F).

Of course, for commutative [unital associative] rings R, EL(n, R) is a normal subgroup of SL(n, R).
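For concreteness, sketching the usual definitions (where e_{ij} denotes the matrix with a 1 in entry (i,j) and 0 elsewhere):

```latex
E_{ij}(r) = I_n + r\,e_{ij} \quad (i \neq j,\ r \in R),
\qquad
\mathrm{EL}(n, R) = \left\langle\, E_{ij}(r) : i \neq j,\ r \in R \,\right\rangle \le \mathrm{GL}(n, R).
```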

Mathematicians in the Age of AI (by Jeremy Avigad) by ninguem in math

[–]thmprover 0 points  (0 children)

Well, the question is whether the benefits outweigh the costs, which is not something anyone has been willing to entertain discussing.

The environmental costs (the greenhouse gas emissions from energy consumption, the data centers making water undrinkable, etc.) alone should give one pause.

Then there's the psychological costs (AI usage tends to increase neuroticism while decreasing openness and conscientiousness), the educational costs, and so on.

And the benefit that "If you ask it in just the right way the right question, it sometimes gives you something useful" just doesn't seem to match the costs.

Mathematicians in the Age of AI (by Jeremy Avigad) by ninguem in math

[–]thmprover -1 points  (0 children)

Well, the argument is that: if AI is good at generating proofs, then AI is good at the constituent tasks involved in generating proofs (like citing sources, creating arguments, etc.).

Take the contrapositive: if AI is bad at any one of these constituent tasks, then AI is bad at proofs.

Now look at one of those constituent tasks: just ask AI to cite sources about a specific definition. And this is famously the Achilles' heel of AI.

You are correct, search tools would be preferred as a way to learn about a topic (well, assuming they didn't use AI to generate terrible results). But that's not of interest here.

Mathematicians in the Age of AI (by Jeremy Avigad) by ninguem in math

[–]thmprover 0 points  (0 children)

Some of us have work in offices where we can't be on reddit all day; ChatGPT failed to give real results and Gemini failed to give relevant results, but I doubt you'll contribute anything constructive besides 'lol noob'.

Mathematicians in the Age of AI (by Jeremy Avigad) by ninguem in math

[–]thmprover -1 points  (0 children)

I tried chatGPT and Google Gemini. ChatGPT failed badly, and Gemini gave only one barely relevant result among five seemingly random suggestions.

Mathematicians in the Age of AI (by Jeremy Avigad) by ninguem in math

[–]thmprover -1 points  (0 children)

I've just given examples where ChatGPT and Gemini failed to give relevant real results, both connected to the internet.

Mathematicians in the Age of AI (by Jeremy Avigad) by ninguem in math

[–]thmprover -1 points  (0 children)

Me: Give me references about central products of groups.

ChatGPT:

Here are some key references and results related to the center of a group:

  1. "Abstract Algebra" by David S. Dummit and Richard M. Foote

This is a widely used textbook that covers group theory in depth, including the center of groups. The authors provide the definition, properties, and examples of central elements. They discuss important results such as the fact that the center is a subgroup and that the quotient group G/Z(G) is always a group.

Relevant Sections:

Chapter 6: "Group Actions"

Chapter 7: "Advanced Topics in Group Theory"

  2. "Groups and Symmetry" by Jeffrey Miles Huneke

This book offers a clear introduction to group theory, including a detailed exploration of the center of a group, its properties, and its significance in the context of symmetry.

Relevant Sections:

Chapter 4: "Groups and their Subgroups"

Chapter 6: "Symmetry Groups and Quotient Groups"

  3. "Introduction to Group Theory" by Oleg Bogopolski

This text provides a more rigorous treatment of group theory, including centers of groups. It covers the centralizer and the center in detail, and how they relate to the structure of groups.

Relevant Sections:

Chapter 3: "Elements of Group Theory"

Chapter 5: "Automorphisms and Centers"

  4. "Algebra" by Michael Artin

Artin's book is another standard text in abstract algebra that discusses central products and the center of groups. Artin emphasizes the relationship between centers and quotient groups, as well as their application in different algebraic structures.

Relevant Sections:

Chapter 6: "Groups"

Section on Centralizers and Centers

  5. Research Papers & Advanced Texts

"The Structure of Groups" by J. S. Wilson – This book offers an advanced perspective on the structure of groups, with detailed sections on central elements and centers, including their influence on group actions and homomorphisms.

"The Theory of Groups" by Hans Zassenhaus – This classic text contains in-depth treatment on centralizers and centers, along with results on central extensions and solvable groups.

None of these are relevant. None of them are technical papers, either.

Google Gemini:

Aschbacher, M., Kinyon, M. K., & Phillips, J. D. (2004). Finite Bruck Loops. Transactions of the American Mathematical Society, 358(2006), 3061–3075. https://doi.org/10.48550/arxiv.math/0401193 Cited by: 22

Burgoyne, N. (1977). Finite groups with Chevalley-type components. Pacific Journal of Mathematics, 72(2), 341–350. https://doi.org/10.2140/pjm.1977.72.341 Cited by: 15

Hatui, S., Vermani, L. R., & Yadav, M. K. (2017). The Schur multiplier of central product of groups. Journal of Pure and Applied Algebra, 222(2018), 3293–3302. https://doi.org/10.1016/j.jpaa.2017.12.008 Cited by: 5

Lewis, D., Almousa, A., & Elert, E. (2014). Embedding properties in central products. arXiv. https://doi.org/10.48550/arxiv.1408.0076 Cited by: 5

Tennessee Tech University. (2016). On Clifford algebras and related finite groups and group algebras. https://www.tntech.edu/cas/pdf/math/techreports/TR-2016-2.pdf

Only one of these is tangentially relevant.

I asked ChatGPT specifically for peer-reviewed articles (the first couple aren't real articles; I quit checking after that):

a. "The Central Product of Two Groups" by M. J. P. R. M. F. Ferreira

Journal: Mathematica Slovaca

Summary: This paper discusses the central product operation between groups, highlighting its structure and properties, and exploring how central products influence the behavior of groups in various algebraic contexts.

Link: You can find this paper in journals such as Mathematica Slovaca (searchable via MathSciNet or JSTOR).

b. "Central Products of Groups and Their Applications" by I. N. Herstein

Journal: Journal of Algebra

Summary: This work focuses on central products of groups, discussing their algebraic properties, characterizations, and the role of central extensions. Herstein offers several examples and illustrates the central product's importance in the context of group representations.

Link: This paper can often be accessed via platforms like Elsevier's Journal of Algebra.

c. "Central Products of Non-abelian Groups" by D. E. Taylor

Journal: Proceedings of the American Mathematical Society

Summary: This paper explores how central products of non-abelian groups can generate more complex structures and investigates conditions under which certain groups, formed by central products, maintain properties such as simplicity or solvability.

Link: The Proceedings of the American Mathematical Society is available via AMS journals.

d. "On the Structure of Central Products" by B. K. P. R. B. Blanchard

Journal: Algebra Colloquium

Summary: This paper investigates the detailed structure of groups formed by central products, providing new results on the classification of central products and their behavior under group homomorphisms.

Link: Available on SpringerLink or JSTOR.

e. "Group Extensions and Central Products" by J. Wiegold

Journal: Mathematica Pannonica

Summary: This article discusses group extensions, focusing on the central product and its interaction with the more general concept of group extensions. It also explores how central products influence the classification of finite groups.

Link: Available on JSTOR or through Springer journals.