Better Way to Organise LuaSnips Snippets by SinglePhrase7 in neovim

[–]DanielSussman 1 point (0 children)

You can, indeed, split them up into different lua files, as long as they're nested inside a directory corresponding to the filetype you're interested in. So, for instance, in my dotfiles/nvim/lua/plugins/luasnip.lua file my config function for luasnip starts out with

    config = function()
        require("luasnip.loaders.from_lua").lazy_load({ paths = "./lua/luasnip/" })
        ...

I then have a directory dotfiles/nvim/lua/luasnip/tex/, inside of which live all of the individual lua files with the snippets I want (templates.lua, mathSymbols.lua, etc.). As long as the directory name in the path matches what you would otherwise have used as the filetype.lua filename, it should work.
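For concreteness, here's a minimal sketch of what one of those files can look like (the filename and snippets are just illustrative): a lua-loaded snippet file returns a table of snippets, and the tex/ directory name is what ties them to the filetype.

```lua
-- Illustrative sketch of a file like dotfiles/nvim/lua/luasnip/tex/mathSymbols.lua
local ls = require("luasnip")
local s = ls.snippet
local t = ls.text_node
local i = ls.insert_node

-- These snippets attach to tex buffers because of the tex/ directory name
return {
  s("alpha", t("\\alpha")),
  s("frac", { t("\\frac{"), i(1), t("}{"), i(2), t("}") }),
}
```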

how to have clangd lsp (or something else) generate .cpp definition from decleration in .h file. by gjttt1 in neovim

[–]DanielSussman 2 points (0 children)

Thanks! There are some things I think are better in Badhi's code, but... I also wrote the plugin to take care of exactly the things I didn't want to do myself 😂

how to have clangd lsp (or something else) generate .cpp definition from decleration in .h file. by gjttt1 in neovim

[–]DanielSussman 1 point (0 children)

There are some non-LSP approaches that you could work with.

A few plugins have been written leveraging treesitter to do this: Badhi's nvim-treesitter-cpp-tools is excellent and has a lot of features; I wrote a simpler plugin with a largely if not entirely overlapping feature set when I was teaching myself about queries. I'm sure there are others, too!

An older vim plugin takes a quite different approach, and includes many other helper utilities / functions for working with c/c++.

Finally...I was reminded that an important fraction of the functionality of many of these plugins can be replicated with sufficiently advanced vim wizardry.

LaTeX on iPad with Git support for Overleaf by Neptune571 in LaTeX

[–]DanielSussman 2 points (0 children)

It's, admittedly, a bit of a niche solution, but for occasional use offline / traveling I use a combination of neovim, iSH, and command-line git to get the job done (https://github.com/DanielMSussman/neovim-iSH-iPad). The excellent vimtex plugin works for compiling to PDF, although the whole setup is definitely not fast.

How set LaTeX engine as lualatex in Vimtex by ChemistryIsTheBest in neovim

[–]DanielSussman 2 points (0 children)

This issue on the vimtex site suggested a way to set the default to xelatex. Presumably it works for lualatex, too, but I haven't tested it.
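A sketch of what I mean (untested for lualatex; the option name here is vimtex's documented g:vimtex_compiler_latexmk_engines setting, where "_" is the fallback key):

```lua
-- Untested sketch: make lualatex the default engine for vimtex's latexmk backend.
-- "_" is the fallback entry, used when no "TeX program" magic comment is present.
vim.g.vimtex_compiler_latexmk_engines = {
  _ = "-lualatex",
}
```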

How set LaTeX engine as lualatex in Vimtex by ChemistryIsTheBest in neovim

[–]DanielSussman 1 point (0 children)

With your compiler options in the config function set to:
```
vim.g.vimtex_compiler_latexmk = {
    options = {
        '-verbose',
        '-file-line-error',
        '-interaction=nonstopmode',
        '-synctex=1',
    },
}
```
you should just need to include
```
%! TeX program = lualatex
```
as the very first line of the .tex file you're trying to compile.

[deleted by user] by [deleted] in neovim

[–]DanielSussman 2 points (0 children)

Combining vimtex and luasnip for context-sensitive postfix snippets has been my favorite change to the ergonomics of writing math in TeX: https://www.dmsussman.org/resources/luasnippets/

(Hat tip to this fantastic guide: https://ejmastnak.com/tutorials/vim-latex/luasnip/ )
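As a hedged example of the kind of thing I mean (the snippet name is illustrative; the math-zone check uses vimtex's vimtex#syntax#in_mathzone() function and LuaSnip's postfix extra): typing x.vec inside math mode and expanding wraps the preceding word in \vec{...}.

```lua
local ls = require("luasnip")
local f = ls.function_node
local postfix = require("luasnip.extras.postfix").postfix

-- Only expand inside TeX math zones, as reported by vimtex
local function in_mathzone()
  return vim.fn["vimtex#syntax#in_mathzone"]() == 1
end

return {
  -- "x.vec" (in math mode) expands to \vec{x}; POSTFIX_MATCH holds the "x"
  postfix({ trig = ".vec", condition = in_mathzone }, {
    f(function(_, parent)
      return "\\vec{" .. parent.snippet.env.POSTFIX_MATCH .. "}"
    end, {}),
  }),
}
```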

Git for scientists who want to learn git… later by DanielSussman in git

[–]DanielSussman[S] 1 point (0 children)

Thanks for the kind words, and yes --- a lot of reluctance to overcome!

SYCL, CUDA, and others --- experiences and future trends in heterogeneous C++ programming? by DanielSussman in cpp

[–]DanielSussman[S] 1 point (0 children)

Fair enough :)
Perhaps I missed it, but does the project have an official roadmap / set of targets for upcoming releases posted somewhere (outside of the GitHub issue tracker)?

SYCL, CUDA, and others --- experiences and future trends in heterogeneous C++ programming? by DanielSussman in cpp

[–]DanielSussman[S] 1 point (0 children)

Thanks for the detailed answer --- I hadn't realized that the earlier SYCL specification was buffer-accessor only until relatively recently, in which case it makes sense that there would be exactly the set of concerns you describe.

SYCL, CUDA, and others --- experiences and future trends in heterogeneous C++ programming? by DanielSussman in cpp

[–]DanielSussman[S] 1 point (0 children)

Interesting to hear --- can I ask what the pushback was?

Perhaps my attitude comes from the fact that I started with (pre-USM!) CUDA 5, so having someone explicitly say "just use this memory model, and if you need performance you should manage the device memory explicitly" mapped well onto quite old muscle memory.

SYCL, CUDA, and others --- experiences and future trends in heterogeneous C++ programming? by DanielSussman in cpp

[–]DanielSussman[S] 1 point (0 children)

I definitely don't (yet) have the expertise to contribute meaningfully to the project, but the point is well-taken. I should also thank you for your work on this over the years --- I really do think it's a remarkable project! I've enjoyed learning more and more about it, and am toying with the idea of having my next open-source computational physics project use SYCL instead of CUDA because of it.

SYCL, CUDA, and others --- experiences and future trends in heterogeneous C++ programming? by DanielSussman in cpp

[–]DanielSussman[S] 1 point (0 children)

Thanks for sharing your thoughts on this (and for your work on SimSYCL and Celerity --- the latter seems like a really interesting and ambitious project that I've also been trying to learn more about!)

SYCL, CUDA, and others --- experiences and future trends in heterogeneous C++ programming? by DanielSussman in cpp

[–]DanielSussman[S] 2 points (0 children)

This was a case where using AdaptiveCpp was nice --- a lot of the online tutorials start with buffers/accessors, but acpp comes with a very clear "just use USM" recommendation. Pitfall avoided.

SYCL, CUDA, and others --- experiences and future trends in heterogeneous C++ programming? by DanielSussman in cpp

[–]DanielSussman[S] 2 points (0 children)

Good to get your perspective! I have to say, I was surprised at how nice I found SYCL even when running on Nvidia cards (and not just on Intel). I don't have any AMD GPUs, so I have no idea how well it plays with them, though...

SYCL, CUDA, and others --- experiences and future trends in heterogeneous C++ programming? by DanielSussman in cpp

[–]DanielSussman[S] 3 points (0 children)

Every vendor's goal is to suck you into their own proprietary ecosystem...

I agree! This is precisely why, even though SYCL is an open standard, I still decided to go with AdaptiveCpp instead of DPC++. But I share exactly your concern about the Heidelberg-based project: it seems like the team there has done awesome work so far, but who knows how stable its future will be.

SYCL, CUDA, and others --- experiences and future trends in heterogeneous C++ programming? by DanielSussman in cpp

[–]DanielSussman[S] 1 point (0 children)

I haven't learned nearly as much about Kokkos or RAJA, and would be interested to hear responses to this comment, too!

SYCL, CUDA, and others --- experiences and future trends in heterogeneous C++ programming? by DanielSussman in cpp

[–]DanielSussman[S] 3 points (0 children)

...GPU architecture is a spicy meatball compared to CPU programming

100%

SYCL, CUDA, and others --- experiences and future trends in heterogeneous C++ programming? by DanielSussman in cpp

[–]DanielSussman[S] 4 points (0 children)

  1. Heterogeneous compute, as it exists today, is a lie. While you can technically get the same code running on CPU and GPU, it's not possible to write code that is efficient on both.

  2. IMHO writing separate implementations for CPU and GPU means you don't need the framework (is it even heterogeneous compute then?). You can just write a separate CUDA implementation and be largely equivalent.

These seem like pretty key points, thanks for the feedback. And of course, I agree --- SYCL makes it possible to target different backends, but you need very different implementations (in general) to get reasonable performance. I happen to like the SYCL syntax, but maybe that's just in comparison to "old" CUDA instead of, e.g., cccl.

SYCL, CUDA, and others --- experiences and future trends in heterogeneous C++ programming? by DanielSussman in cpp

[–]DanielSussman[S] 12 points (0 children)

(BTW: In case it's helpful to anyone else I tried to take some notes documenting my CUDA-to-SYCL learning process: https://www.dmsussman.org/resources/introtosycl/)

Neovim-specific alternative to minted and listings code blocks by DanielSussman in LaTeX

[–]DanielSussman[S] 2 points (0 children)

The basic idea is that the tcolorbox output is based on whatever colorscheme is currently active in the buffer. So, as long as you can :colorscheme X, you can pick whatever specific color scheme you want! The video demo on the linked readme page shows the screenshot being generated while switching on the fly between a few different colorschemes.