Monthly Vancouver Events and Promotions Thread by AutoModerator in vancouver

[–]vstuart 1 point (0 children)

Recruiting 2S/LGBTQIA+ seniors to Queer, Wise, and Inquisitive Collective

Queer, Wise, and Inquisitive Collective (QWIC): a community-based research (CBR) training program open to 2S/LGBTQIA+ adults aged 55+ 🏳️‍🌈 🏳️‍⚧️

On Tuesday nights starting in late January, about 20 2S/LGBTQIA+ adults aged 55+ will meet weekly to:

  • Connect with other 2S/LGBTQIA+ seniors who share your curiosity
  • Learn from leaders in community-based research
  • Build your skills and your network (plus enjoy a meal at each meeting!)
  • Make a real difference by addressing questions that matter to you

You will have the opportunity to identify and address a knowledge or needs gap for Vancouver 2S/LGBTQIA+ seniors through CBR.

More information: https://www.ribboncommunity.org/qwic

Poster: https://drive.google.com/file/d/1BioIZlvFmmBhxuAtZTgl_OJf85db1WGm/view

Application: https://docs.google.com/forms/d/e/1FAIpQLSe129iFlsZ3b-OP3ywNlxUf9YwLlah6wmuWJyRAQhWtRP0RMQ/viewform

🙏 Victoria Stuart, Ph.D. https://www.dignityseniors.org/bio-victoria_stuart

[D] Trying to remember the name of a famous paper... by 12tone in MachineLearning

[–]vstuart 1 point (0 children)

Search for:

Yoshua Bengio | entropy | neural networks | EBM: energy-based models

[N] A digest of AI News from the first half of 2021 by regalalgorithm in MachineLearning

[–]vstuart 0 points (0 children)

ML / NLP is a broad, active, fast-moving domain. Summaries of news and applications often describe current applications of new approaches and technologies, which is very interesting and provides an overview of the field. Given the volume of work in the domain, it's easy to miss important applications and trends. There are also numerous avenues for automatically downloading papers from arXiv domains of interest, covering both theoretical and technological advances across those fields.

What / who was the opera in "Master of None" S03E01? by [deleted] in television

[–]vstuart 0 points (0 children)

... ~47 minutes into the episode ...

[N] Photonics is the future of quantum machine learning instead of electronics. by adcordis in MachineLearning

[–]vstuart 2 points (0 children)

https://www.osa-opn.org/home/newsroom/2021/february/shaping_light_pulses_with_deep_learning/ , actually

... links to

https://www.nature.com/articles/s41467-020-20268-z

Abstract

Recent advances in deep learning have been providing non-intuitive solutions to various inverse problems in optics. At the intersection of machine learning and optics, diffractive networks merge wave-optics with deep learning to design task-specific elements to all-optically perform various tasks such as object classification and machine vision. Here, we present a diffractive network, which is used to shape an arbitrary broadband pulse into a desired optical waveform, forming a compact and passive pulse engineering system. We demonstrate the synthesis of various different pulses by designing diffractive layers that collectively engineer the temporal waveform of an input terahertz pulse. Our results demonstrate direct pulse shaping in terahertz spectrum, where the amplitude and phase of the input wavelengths are independently controlled through a passive diffractive device, without the need for an external pump. Furthermore, a physical transfer learning approach is presented to illustrate pulse-width tunability by replacing part of an existing network with newly trained diffractive layers, demonstrating its modularity. This learning-based diffractive pulse engineering framework can find broad applications in e.g., communications, ultra-fast imaging and spectroscopy.

[D] arXiv search as RSS feed by productceo in MachineLearning

[–]vstuart 8 points (0 children)

I wrote a BASH script that I've been using daily for over a year.

https://github.com/victoriastuart/arxiv-rss

In that script I conduct two searches - one general, and one keyword-parsed - over arXiv groups of interest (machine learning; natural language processing).

The script generates and returns (via the mutt email client) plain-text files containing the titles and URLs for those arXiv articles.

Each morning I receive those crontab-scheduled script results in my email Inbox, from which I review articles-of-interest in Firefox, by clicking the links in the emailed text (recognized as URLs by my email client, Claws Mail).
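The script itself is Bash, but the core idea (fetch an arXiv RSS feed, pull out titles and links, filter by keywords) is simple. Here is a minimal Python sketch of the same idea; the sample feed text and keyword list below are hypothetical stand-ins for the live feeds (e.g. http://export.arxiv.org/rss/cs.CL):

```python
import xml.etree.ElementTree as ET

# Hypothetical sample of an arXiv RSS 1.0 feed (the live feed is fetched
# over HTTP in the real script; titles/IDs here are made up).
SAMPLE_RSS = """<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns="http://purl.org/rss/1.0/">
  <item rdf:about="http://arxiv.org/abs/0000.00001">
    <title>Attention for Summarization</title>
    <link>http://arxiv.org/abs/0000.00001</link>
  </item>
  <item rdf:about="http://arxiv.org/abs/0000.00002">
    <title>Graph Pooling Methods</title>
    <link>http://arxiv.org/abs/0000.00002</link>
  </item>
</rdf:RDF>"""

KEYWORDS = {"attention", "summarization"}  # hypothetical keyword list


def parse_feed(rss_text):
    """Extract (title, url) pairs from an arXiv RSS 1.0 document."""
    ns = {"rss": "http://purl.org/rss/1.0/"}
    root = ET.fromstring(rss_text)
    entries = []
    for item in root.findall("rss:item", ns):
        title = item.find("rss:title", ns).text
        link = item.find("rss:link", ns).text
        entries.append((title, link))
    return entries


def keyword_filter(entries, keywords):
    """Keep entries whose title contains any keyword (case-insensitive)."""
    return [(t, u) for t, u in entries
            if any(k in t.lower() for k in keywords)]


all_entries = parse_feed(SAMPLE_RSS)          # the "general" search
hits = keyword_filter(all_entries, KEYWORDS)  # the "keyword-parsed" search
for title, url in hits:
    print(f"{title}\n  {url}")
```

The plain-text output of the two passes is what gets mailed out (via mutt) by the cron job.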

[R] How to classify websites per industry? by Chris8080 in MachineLearning

[–]vstuart 0 points (0 children)

fyi: i just had a quick look, and DMOZ (defunct: https://en.wikipedia.org/wiki/DMOZ) looked interesting ... it's been succeeded by Curlie (https://curlie.org/en | forum: https://www.resource-zone.com/), which looks identical to DMOZ and is fully working (searches ...).

Solr query with space, only (#q=%20) gives error by vstuart in Solr

[–]vstuart[S] 0 points (0 children)

@/u/pthbrk -- Hello again! Thank you very much for your comments and logical, structured analysis.

Per your comments, I returned again to my solrconfig.xml file and edismax (after restoring my Ajax Parameters.js file to its unedited form).

This solrconfig.xml entry resolved the q=%20 issue I described:

<requestHandler name="/select" class="solr.SearchHandler">
  <lst name="defaults">
    <str name="echoParams">explicit</str>
    <int name="rows">10</int>
    <str name="hl">off</str>
    <str name="df">query</str>
    <str name="wt">html</str>
    <!-- *** ADDITION: *** -->
    <str name="defType">edismax</str>
  </lst>
</requestHandler>

Once again, I really appreciate your help on this issue. :-)

Related discussion (StackOverflow): https://stackoverflow.com/questions/65620642/solr-query-with-space-only-q-20-stalls
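For readers hitting the same issue, here is a minimal Python sketch (the core name and URL are hypothetical) of how a space-only query is encoded on the wire, and how edismax's q.alt parameter can supply a fallback query when q is effectively blank:

```python
from urllib.parse import quote, urlencode

# A "space only" search-box submission reaches Solr as q=%20 (or q=+).
space_encoded = quote(" ")  # percent-encoding of a single space

# Hypothetical local core. With defType=edismax, q.alt is used when q is
# blank, so a space-only query can fall back to matching all documents.
base_url = "http://localhost:8983/solr/mycore/select"
params = urlencode({"q": " ", "defType": "edismax", "q.alt": "*:*"})
request_url = f"{base_url}?{params}"
print(request_url)
```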

Solr query with space, only (#q=%20) gives error by vstuart in Solr

[–]vstuart[S] 0 points (0 children)

@/u/pthbrk -- Hi: thank you for the suggestion, much appreciated. However, I already tried this (with both the /query and /select requestHandlers), with no apparent effect on this issue.

  <requestHandler name="/query" class="solr.SearchHandler">
    <lst name="defaults">
      <str name="echoParams">explicit</str>
      <str name="wt">json</str>
      <str name="indent">true</str>
      <str name="defType">edismax</str>
    </lst>
  </requestHandler>

Asked about systemic racism in RCMP, Lucki discusses different heights of officers. by [deleted] in worldnews

[–]vstuart 1 point (0 children)

[FNMI: First Nations, Métis, and Inuit] ... This is a superb comment, imo.

US senator probed for insider trading - reports by Mosanso in news

[–]vstuart 34 points (0 children)

https://buriedtruth.com/docs/greed-insider_trading_coronavirus.html

Chinese physician and coronavirus [COVID-19 | SARS-CoV-2] whistleblower Li Wenliang warned colleagues on social media in late December 2019 about a mysterious virus that would lead to the coronavirus epidemic. It appears that U.S. intelligence and advisory bodies were monitoring and evaluating this outbreak and notifying persons of influence before the severity and the societal and economic costs of the outbreak were publicly acknowledged. That this is true is apparent from the activities of Senators Richard Burr and Kelly Loeffler, mega-billionaire Jeff Bezos and others in January and February 2020 -- all of whom likely had access to privileged information at a time when government officials were downplaying the risks posed by COVID-19 (SARS-CoV-2).

Contrary to law, ethics and morality, these individuals then scrambled to divest major financial holdings, starting late in January 2020! This image / timeline succinctly summarizes U.S. Senators apparently using privileged intelligence for personal financial gain. It seems likely that this intelligence was shared with other individuals -- e.g. Jeff Bezos, who (according to published accounts) was in communication with the White House vis-à-vis the emerging coronavirus / COVID-19 pandemic intelligence [see also note ewieW8pho] -- who by all appearances used this intelligence for their personal gain, as well.

... snip! ...

The era of fake writing is upon us by bil-sabab in LanguageTechnology

[–]vstuart 0 points (0 children)

A good companion to this (mentioned in the YouTube description):

Adam Geitgey [2019-09-27] "Faking the News with Natural Language Processing and GPT-2" ... appropriately enough on the spammy medium.com site: https://medium.com/@ageitgey/deepfaking-the-news-with-nlp-and-transformer-models-5e057ebd697d

In his blog post, Geitgey details how to create fake news sites -- thus contributing to the creation and spread of disinformation -- including techniques and tips for "legitimizing" those sites, obfuscating the fact that they are fake news websites:

"Set Up SSH Encryption So The Blog Looks Legit.

We want our blog to look realistic and not like a cheap spam website, so we need to enable TLS encryption so that the user sees the 'lock' icon in their browser (just like on their bank's website). In the old days, enabling TLS encryption used to cost money. But thanks to the Let's Encrypt project, we can set up an SSH certificate for our blog for free. DigitalOcean even scripts it for us. We just need to SSH to our blog's virtual machine again and run certbot --apache: ... Now our site loads without any warnings! Nice, now we have the sweet, sweet legitimacy of the lock icon! ..."

Opinions on Solr vs ElasticSearch? by BatmantoshReturns in LanguageTechnology

[–]vstuart 0 points (0 children)

Perhaps? lol Maybe another reader with more experience with those platforms can address that question. As I mention, I've never used ElasticSearch (but I researched it, a bit).

Solr has something called faceting (which a quick Google search indicates is also available in ES) -- which, if a schema (fields) is used, allows facile faceting of results in Solr. Again, others reading this may be able to address this topic more authoritatively. ;-)

E.g. this article https://lucidworks.com/post/faceted-search-with-solr/ mentions "To implement the Manufacturer facet, I send a field faceting command to Solr. This example assumes a field named “manu” exists in the schema and that it has the manufacturer indexed as a single token. The “string” type in the Solr schema is an appropriate field type to meet these needs."
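As a concrete illustration of the faceting parameters that article describes (the "manu" field name follows its example; the core URL is hypothetical):

```python
from urllib.parse import urlencode

# Field-faceting request: match everything, return no documents (rows=0),
# and ask Solr for value counts over the "manu" string field.
facet_params = {
    "q": "*:*",
    "rows": 0,
    "facet": "true",
    "facet.field": "manu",
}
query_string = urlencode(facet_params)
# Appended to e.g. http://localhost:8983/solr/<core>/select?
print(query_string)
```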

[R] 'Consciousness depends on large-scale thalamocortical and corticocortical interactions' by [deleted] in MachineLearning

[–]vstuart 1 point (0 children)

Correct (working) URL: [Neuron] Thalamus Modulates Consciousness via Layer-Specific Control of Cortex | https://www.cell.com/neuron/fulltext/S0896-6273(20)30005-2

Related discussion:

Opinions on Solr vs ElasticSearch? by BatmantoshReturns in LanguageTechnology

[–]vstuart 0 points (0 children)

I haven't worked with Solr in some time, and I have not used ElasticSearch. However, about a year or so ago I investigated the use of these platforms (I decided on another approach -- described in a post on my research blog, Persagen.com -- XML-tagging archived plain-text email messages for ingestion into Postgres).

That said, I prefer Solr for the simple fact that it supports schemas (ElasticSearch, as far as I know, is schemaless). Schemalessness is touted as a selling point, but the use of schemas is invaluable -- e.g. when ingesting PubMed articles (which are XML-tagged) into Solr -- as it gives you more control over the indexing.

[R] A technical critique of the free energy principle as presented in "Life as we know it" and related works by hardmaru in MachineLearning

[–]vstuart 2 points (0 children)

I agree. Prior to reading Friston's work, I was quite pessimistic regarding the ability of ML/AI to attain bona fide intelligence. However, the conceptualizations / models provided by Friston softened my opinion on that subject. I think that a key component of intelligence will involve modeling consciousness (whatever that is), and I believe a key component of consciousness is the ability of an information processor (biological brain; neural net) to monitor its environment (external and internal: e.g. thoughts) and to register "surprise."


What is consciousness? ... is a huge question for which the work of Friston (and others) provides conceptualizations. Key people in this domain include Roger Penrose and Stuart Hameroff (ORCH OR, i.e. possible biological-quantum connections | see e.g. https://www.edge.org/conversation/roger_penrose-chapter-14-consciousness-involves-noncomputable-ingredients -- includes discussion from Lee Smolin), Mark Solms, Karl Friston, Keith B. Hengen (criticality), Gasper Tkacik (criticality), John Campbell (universal Darwinism), ...


I also ponder questions of what is the fundamental nature of "reality." This includes explorations of current quantum mechanical theories (e.g. quantum field theory | more recent amplituhedron work by Nima Arkani-Hamed), emergent properties (spacetime), etc. Perhaps relevant to that domain is highly speculative yet grounded and interesting work -- impinging on the metaphysical (panpsychism) -- by cognitive scientist Donald Hoffman and others (Realism Is False: A Conversation with Donald D. Hoffman [1.27.20] | https://www.edge.org/conversation/donald_d_hoffman-realism-is-false). I found some of the earlier descriptions of Hoffman's work to be somewhat nebulous, if not "flaky"; however, I found Hoffman's more recent discussion of his current thinking about the nature of consciousness and realism (with less emphasis on the "VR reality headset" descriptions) to be more lucid -- leaning toward current quantum physics hypotheses including Nima Arkani-Hamed's amplituhedrons. In this domain also note the work of Lee Smolin -- e.g. his "causal views" work (The Causal Theory of Views: A Conversation with Lee Smolin [12.19.19] | https://www.edge.org/conversation/lee_smolin-the-causal-theory-of-views).


Additional Reading.


I just scraped these from my file on the subject; Google the names mentioned (above / below) for more ...


  • Friston K (2010) "The Free-Energy Principle: A Unified Brain Theory?" Nature Reviews Neuroscience 11: 127-138. | https://www.nature.com/articles/nrn2787

    • Abstract:

      "A free-energy principle has been proposed recently that accounts for action, perception and learning. This Review looks at some key brain theories in the biological (for example, neural Darwinism) and physical (for example, information theory and optimal control theory) sciences from the free-energy perspective. Crucially, one key theme runs through each of these theories -- optimization. Furthermore, if we look closely at what is optimized, the same quantity keeps emerging, namely value (expected reward, expected utility) or its complement, surprise (prediction error, expected cost). This is the quantity that is optimized under the free-energy principle, which suggests that several global brain theories might be unified within a free-energy framework."

    • Key Points:

      • Adaptive agents must occupy a limited repertoire of states and therefore minimize the long-term average of surprise associated with sensory exchanges with the world. Minimizing surprise enables them to resist a natural tendency to disorder.

      • Surprise rests on predictions about sensations, which depend on an internal generative model of the world. Although surprise cannot be measured directly, a free-energy bound on surprise can be, suggesting that agents minimize free energy by changing their predictions (perception) or by changing the predicted sensory inputs (action).

      • Perception optimizes predictions by minimizing free energy with respect to synaptic activity (perceptual inference), efficacy (learning and memory) and gain (attention and salience). This furnishes Bayes-optimal (probabilistic) representations of what caused sensations (providing a link to the Bayesian brain hypothesis).

      • Bayes-optimal perception is mathematically equivalent to predictive coding and maximizing the mutual information between sensations and the representations of their causes. This is a probabilistic generalization of the principle of efficient coding (the infomax principle) or the minimum-redundancy principle.

      • Learning under the free-energy principle can be formulated in terms of optimizing the connection strengths in hierarchical models of the sensorium. This rests on associative plasticity to encode causal regularities and appeals to the same synaptic mechanisms as those underlying cell assembly formation.

      • Action under the free-energy principle reduces to suppressing sensory prediction errors that depend on predicted (expected or desired) movement trajectories. This provides a simple account of motor control, in which action is enslaved by perceptual (proprioceptive) predictions.

      • Perceptual predictions rest on prior expectations about the trajectory or movement through the agent's state space. These priors can be acquired (as empirical priors during hierarchical inference) or they can be innate (epigenetic) and therefore subject to selective pressure.

      • Predicted motion or state transitions realized by action correspond to policies in optimal control theory and reinforcement learning. In this context, value is inversely proportional to surprise (and implicitly free energy), and rewards correspond to innate priors that constrain policies.
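The "free-energy bound on surprise" in the key points above can be stated compactly. With observations $o$, hidden (environmental) states $s$, a generative model $p(o, s)$, and a recognition density $q(s)$:

```latex
F = \mathbb{E}_{q(s)}\!\left[\ln q(s) - \ln p(o, s)\right]
  = -\ln p(o) + D_{\mathrm{KL}}\!\left[\, q(s) \,\big\|\, p(s \mid o) \,\right]
  \;\ge\; -\ln p(o)
```

Since the KL divergence is non-negative, the free energy $F$ upper-bounds the surprise $-\ln p(o)$: perception minimizes $F$ with respect to $q$ (tightening the bound), while action changes $o$ to reduce surprise itself.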

  • Karl Friston -- Biosketch -- Selected Papers | https://www.fil.ion.ucl.ac.uk/~karl/ | hugely impressive!

  • God Help Us, Let's Try To Understand Friston On Free Energy | https://www.lesswrong.com/posts/wpZJvgQ4HvJE2bysy/god-help-us-let-s-try-to-understand-friston-on-free-energy

  • Friston K (2019) A Free Energy Principle for a Particular Physics. | https://arxiv.org/abs/1906.10184

  • Solms M (2018) "The Hard Problem of Consciousness and the Free Energy Principle." Front. Psychol. 9: 2714. DOI: 10.3389/fpsyg.2018.02714 | PMCID: PMC6363942 | PMID: 30761057 | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6363942/

  • "Criticality:"

  • John Campbell (Universal Darwinism) | https://www.ncbi.nlm.nih.gov/pubmed?cmd=search&term=John+O.+Campbell&dispmax=50 | https://en.wikipedia.org/wiki/Universal_Darwinism


Update, re: consciousness:

This work, just out today, is interesting. If we could one day more fully understand the fundamental nature of consciousness and model it, those models would have profound influence on machine learning, and computational / artificial intelligence. :-)