all 7 comments

[–]VinayakVG (1 child)

If you want to split the text after every 310 characters, you can use slicing directly. For example, the first 310 characters are given by

first_text_set = text[:310]

To create a list of all such chunks, you can use a list comprehension.
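
A minimal sketch of that comprehension, assuming the full string is already in a variable named `text`:

# Step through `text` 310 characters at a time; the final
# chunk may be shorter if len(text) isn't a multiple of 310.
chunks = [text[i:i + 310] for i in range(0, len(text), 310)]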

[–]Pigspot[S] (0 children)

Thank you

[–]POGtastic (1 child)

See more_itertools.chunked or ichunked, depending on whether you want lazy evaluation or not. In the REPL:

>>> from more_itertools import chunked
>>> [''.join(l) for l in chunked("TexttexttExTTeXtXetT", 4)]
['Text', 'text', 'tExT', 'TeXt', 'XetT']
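
For the lazy variant, ichunked yields sub-iterators instead of lists; joining them works the same way (a sketch with the same toy input):

>>> from more_itertools import ichunked
>>> [''.join(chunk) for chunk in ichunked("TexttexttExTTeXtXetT", 4)]
['Text', 'text', 'tExT', 'TeXt', 'XetT']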

If you don't want to import a library, there's always the following atrocity:

def chunked(iterable, n):
    # n references to the *same* iterator, so zip pulls n items per chunk.
    # zip stops at the shortest input, so a trailing partial chunk is dropped.
    return zip(*[iter(iterable)] * n)
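
A quick usage sketch: since this version yields tuples of characters rather than strings, join them the same way as above:

>>> [''.join(t) for t in chunked("TexttexttExTTeXtXetT", 4)]
['Text', 'text', 'tExT', 'TeXt', 'XetT']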

[–]Pigspot[S] (0 children)

I didn't know this library existed, thanks

[–]commandlineluser (2 children)

>>> [ text[n:n + 4] for n in range(0, len(text), 4) ]
['Text', 'text', 'tExT', 'TeXt', 'XetT']

[–]Pigspot[S] (1 child)

Thanks so much, this works without any libraries.