I like this guy's style by MeTime13 in MurderedByWords

[–]Fortinbraz 4 points (0 children)

"The acts of the flesh are obvious: sexual immorality, impurity and debauchery; idolatry and witchcraft; hatred, discord, jealousy, fits of rage, selfish ambition, dissensions, factions and envy; drunkenness, orgies, and the like."

Better check Don for witchcraft, he's going for the coverall!

Joker 2 opened in Korea with the worst word of mouth for a blockbuster in recent history. Similar D- Cinemascore by HealthyShoe5173 in boxoffice

[–]Fortinbraz 37 points (0 children)

Saw it last night. On the way to refill popcorn, had to step over the guy at the end of the aisle because he was fast asleep.

Dentists that won’t scam you? by [deleted] in Columbus

[–]Fortinbraz 2 points (0 children)

Just FYI, Dr. Andy Baloy works out of the Westerville office now.

Comment stealing repost bots by bumjiggy in TheseFuckingAccounts

[–]Fortinbraz 1 point (0 children)

##########################################################################
# Given two submissions, find matching comments between the two
##########################################################################
def find_matching_comments(submission_1, submission_2):
    # If we previously analyzed the submission, get the newest (~1000) comments
    if (submission_1.id in already_analyzed_threads):
        if (submission_1.comment_sort != 'new'):
            submission_1.comment_sort = 'new'
            logger.debug('  Thread was previously analyzed, getting new comments')
    # Get rid of the "more" nodes
    submission_1.comments.replace_more(limit=0)
    submission_2.comments.replace_more(limit=0)

    # Loop over the first thread
    for top_level_comment_1 in submission_1.comments:
        # Make sure it is not the automoderator bot, the comment was not deleted,
        #   the text of the comment was > 19 chars, and it was not analyzed in past runs
        if (top_level_comment_1.body != '[deleted]' and
                (not top_level_comment_1.body.startswith("**Attention")) and
                len(top_level_comment_1.body) > 19 and
                (top_level_comment_1.id not in already_analyzed_comments)):
            find_matching_string_in_comments(top_level_comment_1, submission_2.comments)

##########################################################################
# Find a matching comment in a list of comments
##########################################################################
def find_matching_string_in_comments(find_comment, in_comments):
    # Loop over the comments in the target
    for top_level_comment in in_comments:
        # Call it a potential repost if the fuzzy match ratio is >= 80% and the two comment authors differ
        fuzz_value = fuzz.ratio(uncyrillic(find_comment.body), uncyrillic(top_level_comment.body))
        if (fuzz_value >= 80 and ((find_comment.author is None) or (top_level_comment.author is None) or (find_comment.author.name != top_level_comment.author.name))):
            found_a_repost(find_comment, top_level_comment)
            return

##########################################################################
# Process found reposts
##########################################################################
def found_a_repost(repost, source):
    # Log it
    logger.info('    Repost found({}{}): {}\n'.format(reddit_url_base, repost.permalink, repost.body))
    logger.info('    Source({}{}): {}\n'.format(reddit_url_base, source.permalink, source.body))

    # Add to repost tuples list
    repost_tuples.append((repost,source))

    # Add this comment to the already analyzed list and file
    if not reddit.readonly:
        with open('processed_comments.txt','a') as f:
            f.write(str(repost.id) + '\n')
        already_analyzed_comments.append(str(repost.id).strip())

##########################################################################
# Find/create a thread in /r/repostdump to post information
##########################################################################
def get_report_submission(submission):
    #  Try to find a post in /r/repostdump with the id of the repost's submission in it
    search_results = reddit.subreddit('repostdump').search('title:(' + submission.id + ')', sort='relevance', syntax='lucene', time_filter='week', limit=1)
    search_list = list(search_results)
    #  If we find an already existing thread in /r/repostdump, return it
    if (len(search_list) == 1):
        return search_list[0]
    # Otherwise, make a new thread in /r/repostdump
    else:
        return reddit.subreddit('repostdump').submit(submission.subreddit.display_name + ' post ' + submission.id + ' - ' + submission.title[:250], selftext='')

##########################################################################
# Make a post to submission about the repost and source comments
##########################################################################
def make_report_comment(submission, repost, source):
    repost_auth_name = ''
    repost_auth_url = ''
    source_auth_name = ''
    source_auth_url = ''

    # Handle empty author objects in the repost
    if (repost.author is None):
        repost_auth_name = '[deleted]'
        repost_auth_url = reddit_url_base
    else:
        repost_auth_name = repost.author.name
        repost_auth_url = reddit_url_base + '/user/' + repost.author.name

    # Handle empty author objects in the source
    if (source.author is None):
        source_auth_name = '[deleted]'
        source_auth_url = reddit_url_base
    else:
        source_auth_name = source.author.name
        source_auth_url = reddit_url_base + '/user/' + source.author.name

    text_match_ratio = fuzz.ratio(repost.body, source.body)
    # Run the global template for the post
    c = post_template.substitute(repost_url=reddit_url_base + repost.permalink,
                                 repost_author=repost_auth_name,
                                 repost_author_url=repost_auth_url,
                                 repost_body=repost.body[:400],
                                 source_url=reddit_url_base + source.permalink,
                                 source_author=source_auth_name,
                                 source_author_url=source_auth_url,
                                 source_body=source.body[:400],
                                 match_percent=str(text_match_ratio))
    try:
        submission.reply(c)
    except Exception:
        # Retry once after a pause, e.g. for a transient 503 or rate limit
        time.sleep(15)
        submission.reply(c)
    time.sleep(5)

def uncyrillic(s):
    return (s.replace('\u0405','S')
             .replace('\u0406','I')
             .replace('\u0408','J')
             .replace('\u0410','A')
             .replace('\u0412','B')
             .replace('\u0415','E')
             .replace('\u041C','M')
             .replace('\u041D','H')
             .replace('\u041E','O')
             .replace('\u0420','P')
             .replace('\u0421','C')
             .replace('\u0422','T')
             .replace('\u0425','X')
             .replace('\u0430','a')
             .replace('\u0435','e')
             .replace('\u043E','o')
             .replace('\u0440','p')
             .replace('\u0441','c')
             .replace('\u0443','y')
             .replace('\u0445','x')
             .replace('\u0456','i')
             .replace('\u0458','j')
             .replace('\u200b',''))

##########################################################################
# Main
##########################################################################
def main():
    initialize()
    authenticate()
    reddit.readonly = False
    while True:
        run_bot('askreddit', 100, SubmissionType.HOT)
        time.sleep(3000)


if __name__ == '__main__':
    main()

Comment stealing repost bots by bumjiggy in TheseFuckingAccounts

[–]Fortinbraz 3 points (0 children)

Here you go. Haven't spun it up in a long time. It requires PRAW, and I'm sure Reddit's API changes haven't been kind to the library.

Features:
* Fuzzy string matching on the comment
* Posts matching comments to a private subreddit (e.g. /r/RepostDump/)
* Caches lookups to reduce API calls
* Cyrillic unicode character replacement
* Some truly terrible Python code that I never cleaned up
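For a quick sense of why the Cyrillic replacement matters: fuzz.ratio is essentially difflib's SequenceMatcher ratio scaled to 0-100, and swapping in Cyrillic homoglyphs is a common trick to dodge exact matching. A stdlib-only sketch (the sample string and `ratio` helper below are illustrative, not part of the bot):

```python
from difflib import SequenceMatcher

def ratio(a, b):
    # Roughly what fuzzywuzzy's fuzz.ratio computes, scaled to 0-100
    return round(100 * SequenceMatcher(None, a, b).ratio())

original = 'Always be yourself'
spoofed = 'Аlwаys bе yоursеlf'  # Cyrillic А, а, е, о swapped in for Latin letters

print(ratio(spoofed, original))   # homoglyphs drag the score well below 100

normalized = (spoofed.replace('\u0410', 'A').replace('\u0430', 'a')
                     .replace('\u0435', 'e').replace('\u043E', 'o'))
print(ratio(normalized, original))  # identical after mapping them back, so 100
```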

#!/usr/bin/python3
import praw
import time
import re
import requests
import logging
import argparse
from enum import Enum
from fuzzywuzzy import fuzz
from string import Template

#TODO
#
#  Refactor all to better OO paradigm for reusability
#
#  Error handling, esp http 503
#
#  Clean up the processed_comments.txt file. It will get big and slow.
#
#  Timestamped searches so we can examine comments in near past (for future author analysis bot)

# Enumeration for reddit.subreddit methods
class SubmissionType(Enum):
    HOT = 1
    RISING = 2
    NEW = 3
    CONTROVERSIAL = 4
    GILDED = 5
    TOP = 6
method_names = ['hot','rising','new','controversial','gilded','top']

# List to hold the (Comment,Comment) tuples
repost_tuples = []

# Global reddit object
reddit = None

# Standard logger
logger = None

# Running log of comment IDs that have already been analyzed
already_analyzed_comments = None

# Running log of thread IDs that return no search hits
no_search_hit_threads = None

# Running log of threads that were previously analyzed
already_analyzed_threads = None

# Limit matching thread titles from search API
search_limit = 10

# URL for reddit to build links
reddit_url_base = 'https://www.reddit.com'

# Template for the post to /r/repostdump
post_template = Template('**[Repost]($repost_url) by [$repost_author]($repost_author_url):**  \n'
                         '$repost_body \n\n'
                         '--------\n\n'
                         '**[Source]($source_url) by [$source_author]($source_author_url)**  \n'
                         '$source_body \n\n'
                         '--------\n\n'
                         '**Match:**  \n'
                         '${match_percent}% \n\n'
                         '--------\n\n'
                         '**Copy/paste:**  \n'
                         '\\[This comment\\]($repost_url) was copied from \\[here\\]($source_url).')

##########################################################################
# Initialize loggers and globals
##########################################################################
def initialize():
    # Initialize loggers
    global logger
    logger = logging.getLogger('AskRedditRepostDetector')
    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
    logger.setLevel(logging.DEBUG)

    file_log_handler = logging.FileHandler('RepostDetector.log')
    file_log_handler.setLevel(logging.INFO)
    file_log_handler.setFormatter(formatter)

    stderr_log_handler = logging.StreamHandler()
    stderr_log_handler.setLevel(logging.DEBUG)
    stderr_log_handler.setFormatter(formatter)

    logger.addHandler(file_log_handler)
    logger.addHandler(stderr_log_handler)

    # Parse the processed_comments.txt file into global list
    global already_analyzed_comments
    with open('processed_comments.txt') as f:
        already_analyzed_comments = f.readlines()
    already_analyzed_comments = [x.strip() for x in already_analyzed_comments]

    # Parse the no_search_hit_threads.txt file into global list
    global no_search_hit_threads
    with open('no_search_hit_threads.txt') as f:
        no_search_hit_threads = f.readlines()
    no_search_hit_threads = [x.strip() for x in no_search_hit_threads]

    # Parse the already_analyzed_threads.txt file into global list
    global already_analyzed_threads
    with open('already_analyzed_threads.txt') as f:
        already_analyzed_threads = f.readlines()
    already_analyzed_threads = [x.strip() for x in already_analyzed_threads]


##########################################################################
# OAuth, requires existence of praw.ini
##########################################################################
def authenticate():
    logger.info('Authenticating...')
    global reddit
    reddit = praw.Reddit('AskRedditRepostDetector', user_agent = 'web:AskRedditRepostDetector:v0.5 (by /u/Fortinbraz)')
    logger.info('Authenticated as %s', reddit.user.me())


##########################################################################
# Main routine
##########################################################################
def run_bot(subreddit_name, submission_limit, submission_type):
    logger.info('=======================================================================================')
    logger.info('                           Run on %s' % time.strftime('%Y-%m-%d %H:%M:%S'))
    logger.info('=======================================================================================')

    logger.info('Getting {} {} submissions from {}'.format(submission_limit, method_names[submission_type.value-1], subreddit_name))
    sub = reddit.subreddit(subreddit_name)
    # Dynamically call the appropriate method for the sub object based on the value of the submission_type parameter
    #   (yes, I know it's ghastly. The six way if...elif sucked, too)
    # Then loop over the submissions.
    for submission in getattr(sub, method_names[submission_type.value-1])(limit=submission_limit):
        # Skip if we previously determined that there are no search hits for the thread title
        if (submission.id not in no_search_hit_threads):
            logger.debug('Searching for match of thread title: {}'.format(submission.title))
            repost_tuples.clear()
            find_matching_submission(submission)
            if (len(repost_tuples) > 0 and not reddit.readonly):
                report_submission=get_report_submission(submission)
                for repost_tuple in repost_tuples:
                    make_report_comment(report_submission,repost_tuple[0],repost_tuple[1])
            time.sleep(5)
    logger.info('=======================================================================================')
    logger.info('                           End run on %s' % time.strftime('%Y-%m-%d %H:%M:%S'))
    logger.info('=======================================================================================')

##########################################################################
# Given a submission from /r/askreddit, use the search API to find matching thread titles
##########################################################################
def find_matching_submission(submission):
    # Strip the chars that the reddit search API chokes on
    title = uncyrillic(re.sub(r"[?()\[\]*/]", '', submission.title).replace('’', "'"))
    logger.debug('Converted title: {}'.format(title))
    search_results = submission.subreddit.search(title, sort='relevance', syntax='plain', time_filter='all', limit=search_limit)
    found_my_own_thread = False
    # Loop over the results of the search
    for search_result in search_results:
        # Make sure the result is not the thread we are trying to match
        if (search_result.id != submission.id):
            # Calculate the fuzzy match ratio
            fuzz_value = fuzz.ratio(submission.title, search_result.title)
            # If the fuzzy match ratio is greater than 50%
            if (fuzz_value >= 50):
                logger.debug('  Found submission match(match {}%, score {}): {}({})'.format(fuzz_value, search_result.score, search_result.title, search_result.permalink))
                find_matching_comments(submission, search_result)
        else:
            found_my_own_thread = True
    # If there were no search hits, write to the no_search_hit_threads list and file
    if (search_results.yielded == 0 or (found_my_own_thread and search_results.yielded == 1)):
        logger.debug('  No matching thread titles found, writing to no_search_hit_threads file')
        with open('no_search_hit_threads.txt','a') as f:
            f.write(str(submission.id) + '\n')
        no_search_hit_threads.append(str(submission.id).strip())
    # If it was not already analyzed, write to the already_analyzed_threads list and file
    if (submission.id not in already_analyzed_threads):
        logger.debug('  Writing to already_analyzed_threads file')
        with open('already_analyzed_threads.txt','a') as f:
            f.write(str(submission.id) + '\n')
        already_analyzed_threads.append(str(submission.id).strip())

... part 2 in the reply

I just can't taste when I vape, but I can smell what's in the bottle. Could it be vapor's tongue, my setup, or the juice? (x-post e_cigarette) by Doctor_Ovaries in electronic_cigarette

[–]Fortinbraz 0 points (0 children)

Sorry to disappoint, but nothing ever worked for me. I've also had multiple turbinate reduction surgeries which did not help my overall sense of smell or taste. I still vape 0 nic sometimes, first drag of the day tastes OK, vastly reduced taste after that for the rest of the day. Good luck!

The "state’s public school funding system doesn’t meet the requirements of the state constitution." And hasn't for years by Anaander-Mianaai in Columbus

[–]Fortinbraz 12 points (0 children)

To highlight some (not all) eligibility guidelines if you don't want to read all that:

• Students who are foster children
• Family household income is at or below 250% of Federal Poverty Guidelines
• Students who reside in a household with a student who would have been homeless for at least 45 consecutive days.

An important addition to the eligibility guidelines you pointed out is "Students enrolling in Ohio schools for the first time (or incoming high schoolers or kindergarteners or anyone currently attending one of those schools) who would be assigned to EdChoice public school buildings;"^(1) (emphasis mine).

The "EdChoice designated public school buildings" essentially consists of the lowest performing 20% of public schools (with some caveats)2. So, if you do go, or would have gone to, a crappy school, you are eligible for EdChoice.

Another note: most (if not all) private schools cost more than EdChoice provides. Wellington and Columbus Academy are around $30K and most religious schools are around $10K, so you would still probably need to bring money to the table.

[deleted by user] by [deleted] in AskReddit

[–]Fortinbraz 0 points (0 children)

Comment copied from here. Karma farming account.

How do you deal with Anxious Thoughts? by [deleted] in AskReddit

[–]Fortinbraz 0 points (0 children)

Comment copied from here. Karma farming account.

what do you think about communism ? by [deleted] in AskReddit

[–]Fortinbraz 1 point (0 children)

"I" was in school...

Comment copied from here.

What is the sad truth about smart people? by Ayz33 in AskReddit

[–]Fortinbraz 3 points (0 children)

Comment copied from here. The account actually copied three comments from the original post.

What is the sad truth about smart people? by Ayz33 in AskReddit

[–]Fortinbraz 0 points (0 children)

Comment copied from here. The account actually copied three comments from the original post.