glin-profanity

Verified · Scanned 2/18/2026

glin-profanity is a client-side profanity detection library that offers leetspeak detection, Unicode normalization, and optional ML-powered toxicity checks. No security-relevant behaviors detected.

from clawhub.ai · v4630fac · 4.5 KB · 0 installs
Scanned from 1.0.0 at 4630fac · Transparency log
$ vett add clawhub.ai/thegdsks/glin-profanity

Glin Profanity - Content Moderation Library

Profanity detection library that catches evasion attempts like leetspeak (f4ck, sh1t), Unicode tricks (Cyrillic lookalikes), and obfuscated text.

Installation

# JavaScript/TypeScript
npm install glin-profanity

# Python
pip install glin-profanity

Quick Usage

JavaScript/TypeScript

import { checkProfanity, Filter } from 'glin-profanity';

// Simple check
const result = checkProfanity("Your text here", {
  detectLeetspeak: true,
  normalizeUnicode: true,
  languages: ['english']
});

result.containsProfanity  // boolean
result.profaneWords       // array of detected words
result.processedText      // censored version

// With Filter instance
const filter = new Filter({
  replaceWith: '***',
  detectLeetspeak: true,
  normalizeUnicode: true
});

filter.isProfane("text")           // boolean
filter.checkProfanity("text")      // full result object

Python

from glin_profanity import Filter

profanity_filter = Filter({
    "languages": ["english"],
    "replace_with": "***",
    "detect_leetspeak": True
})

profanity_filter.is_profane("text")           # True/False
profanity_filter.check_profanity("text")      # Full result dict

React Hook

import { useProfanityChecker } from 'glin-profanity';

function ChatInput() {
  const { result, checkText } = useProfanityChecker({
    detectLeetspeak: true
  });

  return (
    <input onChange={(e) => checkText(e.target.value)} />
  );
}

Key Features

Leetspeak detection: f4ck, sh1t, @$$ patterns
Unicode normalization: Cyrillic fսck → fuck
24 languages: including Arabic, Chinese, Russian, Hindi
Context whitelists: medical, gaming, technical domains
ML integration: optional TensorFlow.js toxicity detection
Result caching: LRU cache for performance
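The leetspeak and Unicode features amount to a character-folding pass before word-list matching. A minimal sketch of that idea, assuming a substitution-map approach (this is an illustration, not glin-profanity's actual implementation; `normalizeEvasions`, `LEET_MAP`, and `CONFUSABLES` are made-up names):

```javascript
// Illustration only: fold common leetspeak substitutes and Unicode
// lookalikes back to ASCII letters before matching against a word list.
const LEET_MAP = { '1': 'i', '3': 'e', '0': 'o', '4': 'a', '@': 'a', '$': 's', '5': 's', '7': 't' };
const CONFUSABLES = { '\u0430': 'a', '\u057D': 'u' }; // Cyrillic "а", Armenian "ս"

function normalizeEvasions(text) {
  return [...text.toLowerCase()]
    .map((ch) => CONFUSABLES[ch] ?? LEET_MAP[ch] ?? ch)
    .join('');
}

normalizeEvasions('sh1t');      // "shit"
normalizeEvasions('@$$');       // "ass"
normalizeEvasions('f\u057Dck'); // "fuck"
```

A real checker layers fuzzier matching on top (the library's `leetspeakLevel` option suggests tiered aggressiveness), but the folding step is the core of catching these evasions.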

Configuration Options

const filter = new Filter({
  languages: ['english', 'spanish'],     // Languages to check
  detectLeetspeak: true,                 // Catch f4ck, sh1t
  leetspeakLevel: 'moderate',            // basic | moderate | aggressive
  normalizeUnicode: true,                // Catch Unicode tricks
  replaceWith: '*',                      // Replacement character
  preserveFirstLetter: false,            // f*** vs ****
  customWords: ['badword'],              // Add custom words
  ignoreWords: ['hell'],                 // Whitelist words
  cacheSize: 1000                        // LRU cache entries
});
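The `cacheSize` option caps an LRU cache of previous results. For readers unfamiliar with the pattern, here is a minimal LRU sketch built on a JavaScript `Map`, whose insertion order makes evicting the least recently used entry straightforward (illustration only; `LruCache` is a made-up name, not the library's internal class):

```javascript
// Minimal LRU cache: a Map iterates in insertion order, so re-inserting
// an entry on access keeps the most recently used entries at the end.
class LruCache {
  constructor(maxSize = 1000) {
    this.maxSize = maxSize;
    this.map = new Map();
  }
  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    this.map.delete(key); // refresh recency
    this.map.set(key, value);
    return value;
  }
  set(key, value) {
    if (this.map.has(key)) this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.maxSize) {
      // evict the least recently used (first) entry
      this.map.delete(this.map.keys().next().value);
    }
  }
}
```

Keyed on the input text, a cache like this makes repeated checks of identical strings (common in chat traffic) effectively free.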

Context-Aware Analysis

import { analyzeContext } from 'glin-profanity';

const result = analyzeContext("The patient has a breast tumor", {
  domain: 'medical',        // medical | gaming | technical | educational
  contextWindow: 3,         // Words around match to consider
  confidenceThreshold: 0.7  // Minimum confidence to flag
});
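The `contextWindow` option can be pictured as checking the words surrounding a match against a domain whitelist. A hedged sketch of that idea under assumed semantics (illustration only; `isWhitelistedByContext` and `MEDICAL_WHITELIST` are hypothetical names, and the library's real scoring is confidence-based rather than this simple boolean):

```javascript
// Sketch: a flagged term is ignored when a domain whitelist word
// appears within `windowSize` words of the match.
const MEDICAL_WHITELIST = new Set(['patient', 'tumor', 'exam', 'diagnosis']);

function isWhitelistedByContext(words, matchIndex, windowSize) {
  const start = Math.max(0, matchIndex - windowSize);
  const end = Math.min(words.length, matchIndex + windowSize + 1);
  return words.slice(start, end).some((w) => MEDICAL_WHITELIST.has(w));
}

const words = 'the patient has a breast tumor'.split(' ');
isWhitelistedByContext(words, 4, 3); // true: "patient" and "tumor" are nearby
```

This is why the medical sentence above is not flagged: the ambiguous term sits next to clearly clinical vocabulary.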

Batch Processing

import { batchCheck } from 'glin-profanity';

const results = batchCheck([
  "Comment 1",
  "Comment 2",
  "Comment 3"
], { returnOnlyFlagged: true });

ML-Powered Detection (Optional)

import { loadToxicityModel, checkToxicity } from 'glin-profanity/ml';

await loadToxicityModel({ threshold: 0.9 });

const result = await checkToxicity("You're the worst");
// { toxic: true, categories: { toxicity: 0.92, insult: 0.87 } }
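Per-category scores like those above can be reduced to a single decision by thresholding. A small sketch of such post-processing (`summarizeToxicity` is a hypothetical helper written for this example, not part of glin-profanity's API):

```javascript
// Flag text as toxic when any category score crosses the threshold,
// and report which categories did so.
function summarizeToxicity(categories, threshold = 0.9) {
  const flagged = Object.entries(categories)
    .filter(([, score]) => score >= threshold)
    .map(([name]) => name);
  return { toxic: flagged.length > 0, flagged };
}

summarizeToxicity({ toxicity: 0.92, insult: 0.87 }, 0.9);
// { toxic: true, flagged: ['toxicity'] }
```

Raising the threshold trades recall for precision: at 0.9 only the `toxicity` category fires here, while 0.8 would also flag `insult`.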

Common Patterns

Chat/Comment Moderation

const filter = new Filter({
  detectLeetspeak: true,
  normalizeUnicode: true,
  languages: ['english']
});

bot.on('message', (msg) => {
  if (filter.isProfane(msg.text)) {
    deleteMessage(msg);
    warnUser(msg.author);
  }
});

Content Validation Before Publish

const result = filter.checkProfanity(userContent);

if (result.containsProfanity) {
  return {
    valid: false,
    issues: result.profaneWords,
    suggestion: result.processedText  // Censored version
  };
}

Resources