
#2 updated the error in readtheDocs #439

Merged 1 commit on Jun 11, 2024
12 changes: 4 additions & 8 deletions PAMI/correlatedPattern/basic/CoMine.py
@@ -115,9 +115,7 @@ def traverse(self) -> Tuple[List[int], int]:

class CoMine(_ab._correlatedPatterns):
"""

About this algorithm
====================
**About this algorithm**

:**Description**: CoMine is one of the fundamental algorithms for discovering correlated patterns in a transactional database. It is based on the traditional FP-Growth algorithm and uses a depth-first search technique to find all correlated patterns in a transactional database.

@@ -144,8 +142,7 @@ class CoMine(_ab._correlatedPatterns):
- **itemSetBuffer** (*list*) -- *it stores the items used during mining.*
- **maxPatternLength** (*int*) -- *it represents the constraint on pattern length.*

Execution methods
=================
**Execution methods**

**Terminal command**

@@ -197,10 +194,9 @@ class CoMine(_ab._correlatedPatterns):

print("Total ExecutionTime in seconds:", run)

Credits
=======
**Credits**

The complete program was written by B.Sai Chitra under the supervision of Professor Rage Uday Kiran.
The complete program was written by B.Sai Chitra and revised by Tarun Sreepada under the supervision of Professor Rage Uday Kiran.

"""

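The CoMine docstring above says the algorithm mines correlated patterns with a depth-first, FP-Growth-style search, but it does not spell out the correlation test itself. As a rough illustration only, the sketch below checks a candidate pattern against all-confidence, a measure commonly used for correlated pattern mining; the measure, the helper names, and the minAllConf parameter are assumptions for illustration, not PAMI's implementation.

.. code-block:: python

    # Illustrative sketch only: the all-confidence measure and these helper
    # names are assumptions, not CoMine's actual internals.

    def all_confidence(pattern, support, item_support):
        # allConf(X) = sup(X) / max(sup(i) for i in X)
        return support[pattern] / max(item_support[item] for item in pattern)

    def is_correlated(pattern, support, item_support, minSup, minAllConf):
        # A pattern is reported when it is both frequent and sufficiently correlated.
        return (support[pattern] >= minSup and
                all_confidence(pattern, support, item_support) >= minAllConf)

    # Example with hypothetical supports:
    # support = {frozenset({'a', 'b'}): 4}; item_support = {'a': 5, 'b': 8}
    # all_confidence(frozenset({'a', 'b'}), support, item_support)  # -> 0.5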
11 changes: 4 additions & 7 deletions PAMI/correlatedPattern/basic/CoMinePlus.py
@@ -115,8 +115,7 @@ def traverse(self) -> Tuple[List[int], int]:

class CoMine(_ab._correlatedPatterns):
"""
About this algorithm
====================
**About this algorithm**

:**Description**: CoMinePlus is one of the fundamental algorithms for discovering correlated patterns in a transactional database. It is based on the traditional FP-Growth algorithm and uses a depth-first search technique to find all correlated patterns in a transactional database.

@@ -143,8 +142,7 @@ class CoMine(_ab._correlatedPatterns):
- **itemSetBuffer** (*list*) -- *it stores the items used during mining.*
- **maxPatternLength** (*int*) -- *it represents the constraint on pattern length.*

Execution methods
=================
**Execution methods**

**Terminal command**

@@ -196,10 +194,9 @@ class CoMine(_ab._correlatedPatterns):

print("Total ExecutionTime in seconds:", run)

Credits
=======
**Credits**

The complete program was written by B.Sai Chitra and revised by Tarun Sreepads under the supervision of Professor Rage Uday Kiran.
The complete program was written by B.Sai Chitra and revised by Tarun Sreepada under the supervision of Professor Rage Uday Kiran.

"""

13 changes: 5 additions & 8 deletions PAMI/frequentPattern/basic/Apriori.py
@@ -2,7 +2,7 @@
#
# **Importing this algorithm into a python program**
#
# import PAMI1.frequentPattern.basic.Apriori as alg
# import PAMI.frequentPattern.basic.Apriori as alg
#
# iFile = 'sampleDB.txt'
#
@@ -58,8 +58,7 @@

class Apriori(_ab._frequentPatterns):
"""
About this algorithm
====================
**About this algorithm**

:**Description**: Apriori is one of the fundamental algorithms for discovering frequent patterns in a transactional database. This program employs the apriori property (or downward-closure property) to reduce the search space effectively. It uses a breadth-first search technique to find the complete set of frequent patterns in a transactional database.

@@ -79,8 +78,7 @@ class Apriori(_ab._frequentPatterns):
- **Database** (*list*) -- *To store the transactions of a database in list.*


Execution methods
=================
**Execution methods**

**Terminal command**

@@ -101,7 +99,7 @@ class Apriori(_ab._frequentPatterns):

.. code-block:: python

import PAMI1.frequentPattern.basic.Apriori as alg
import PAMI.frequentPattern.basic.Apriori as alg

iFile = 'sampleDB.txt'

@@ -132,8 +130,7 @@ class Apriori(_ab._frequentPatterns):
print("Total ExecutionTime in seconds:", run)


Credits
=======
**Credits**

The complete program was written by P. Likhitha and revised by Tarun Sreepada under the supervision of Professor Rage Uday Kiran.

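The Apriori docstring above explains that the algorithm relies on the apriori (downward-closure) property and a breadth-first search. A minimal, self-contained sketch of that general idea over a plain list of transactions follows; it assumes minSup is an absolute count and is not the PAMI Apriori class itself.

.. code-block:: python

    # Illustrative sketch: breadth-first frequent-itemset mining with
    # downward-closure pruning (not PAMI's implementation).
    from itertools import combinations

    def apriori(transactions, minSup):
        transactions = [frozenset(t) for t in transactions]

        # Count frequent 1-itemsets.
        counts = {}
        for t in transactions:
            for item in t:
                key = frozenset([item])
                counts[key] = counts.get(key, 0) + 1
        frequent = {s: c for s, c in counts.items() if c >= minSup}
        result = dict(frequent)

        k = 2
        while frequent:
            items = sorted({i for s in frequent for i in s})
            # Downward closure: keep a k-candidate only if every (k-1)-subset is frequent.
            candidates = [frozenset(c) for c in combinations(items, k)
                          if all(frozenset(sub) in frequent
                                 for sub in combinations(c, k - 1))]
            counts = {c: sum(1 for t in transactions if c <= t) for c in candidates}
            frequent = {c: n for c, n in counts.items() if n >= minSup}
            result.update(frequent)
            k += 1
        return result

    # Example: apriori([['a', 'b'], ['a', 'c'], ['a', 'b', 'c']], minSup=2)
    # returns the five itemsets {a}, {b}, {c}, {a,b}, {a,c} with their counts.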
98 changes: 40 additions & 58 deletions PAMI/frequentPattern/basic/Aprioribitset.py
@@ -1,10 +1,13 @@
# AprioriBitset is one of the fundamental algorithms for discovering frequent patterns in a transactional database.
#
# **Importing this algorithm into a python program**
# ---------------------------------------------------------
#
# import PAMI.frequentPattern.basic.AprioriBitset as alg
#
# iFile = 'sampleDB.txt'
#
# minSup = 10 # can also be specified between 0 and 1
#
# obj = alg.AprioriBitset(iFile, minSup)
#
# obj.mine()
@@ -54,43 +57,30 @@

class Aprioribitset(_ab._frequentPatterns):
"""
:Description: AprioriBitset is one of the fundamental algorithm to discover frequent patterns in a transactional database.

:Reference: Mohammed Javeed Zaki: Scalable Algorithms for Association Mining. IEEE Trans. Knowl. Data Eng. 12(3):
372-390 (2000), https://ieeexplore.ieee.org/document/846291

:param iFile: str :
Name of the Input file to mine complete set of frequent patterns
:param oFile: str :
Name of the output file to store complete set of frequent patterns
:param minSup: int or float or str :
The user can specify minSup either in count or proportion of database size. If the program detects the data type of minSup is integer, then it treats minSup is expressed in count.
:param sep: str :
This variable is used to distinguish items from one another in a transaction. The default seperator is tab space. However, the users can override their default separator.
**About this algorithm**

:Attributes:
:**Description**: AprioriBitset is one of the fundamental algorithms for discovering frequent patterns in a transactional database.

startTime : float
To record the start time of the mining process
:**Reference**: Mohammed Javeed Zaki: Scalable Algorithms for Association Mining. IEEE Trans. Knowl. Data Eng. 12(3):
372-390 (2000), https://ieeexplore.ieee.org/document/846291

endTime : float
To record the completion time of the mining process
:**Parameters**: - **iFile** (*str or URL or dataFrame*) -- *Name of the Input file to mine complete set of frequent patterns.*
- **oFile** (*str*) -- *Name of the output file to store complete set of frequent patterns.*
- **minSup** (*int or float or str*) -- *The user can specify minSup either in count or proportion of database size. If the program detects that the data type of minSup is integer, then it treats minSup as expressed in count. Otherwise, it will be treated as a float.*
- **sep** (*str*) -- *This variable is used to distinguish items from one another in a transaction. The default separator is the tab space. However, users can override the default separator.*

finalPatterns : dict
Storing the complete set of patterns in a dictionary variable
:**Attributes**: - **startTime** (*float*) -- *To record the start time of the mining process.*
- **endTime** (*float*) -- *To record the completion time of the mining process.*
- **finalPatterns** (*dict*) -- *Storing the complete set of patterns in a dictionary variable.*
- **memoryUSS** (*float*) -- *To store the total amount of USS memory consumed by the program.*
- **memoryRSS** (*float*) -- *To store the total amount of RSS memory consumed by the program.*
- **Database** (*list*) -- *To store the transactions of a database in list.*

memoryUSS : float
To store the total amount of USS memory consumed by the program

memoryRSS : float
To store the total amount of RSS memory consumed by the program

Database : list
To store the transactions of a database in list
**Execution methods**


**Methods to execute code on terminal**
------------------------------------------
**Terminal command**

.. code-block:: console

@@ -102,22 +92,26 @@ class Aprioribitset(_ab._frequentPatterns):

(.venv) $ python3 AprioriBitset.py sampleDB.txt patterns.txt 10.0

.. note:: minSup will be considered in percentage of database transactions
.. note:: minSup can be specified as a support count or as a value between 0 and 1.


**Calling from a python program**

**Importing this algorithm into a python program**
---------------------------------------------------------
.. code-block:: python

import PAMI.frequentPattern.basic.AprioriBitset as alg
import PAMI.frequentPattern.basic.Aprioribitset as alg

iFile = 'sampleDB.txt'

obj = alg.AprioriBitset(iFile, minSup)
minSup = 10 # can also be specified between 0 and 1

obj = alg.Aprioribitset(iFile, minSup)

obj.mine()

frequentPatterns = obj.getPatterns()
frequentPattern = obj.getPatterns()

print("Total number of Frequent Patterns:", len(frequentPatterns))
print("Total number of Frequent Patterns:", len(frequentPattern))

obj.save(oFile)

@@ -135,10 +129,10 @@ class Aprioribitset(_ab._frequentPatterns):

print("Total ExecutionTime in seconds:", run)

**Credits:**
-------------------

The complete program was written by Yudai Masu under the supervision of Professor Rage Uday Kiran.
**Credits**

The complete program was written by Yudai Masu and revised by Tarun Sreepada under the supervision of Professor Rage Uday Kiran.

"""

@@ -160,11 +154,8 @@ def _convert(self, value):
To convert the user specified minSup value

:param value: user specified minSup value

:type value: int

:return: converted type

:rtype: int or float or string
"""
if type(value) is int:
@@ -216,28 +207,19 @@ def _creatingItemSets(self):
print("File Not Found")
self._minSup = self._convert(self._minSup)

@deprecated(
"It is recommended to use 'mine()' instead of 'startMine()' for mining process. Starting from January 2025, 'startMine()' will be completely terminated.")
@deprecated("It is recommended to use 'mine()' instead of 'startMine()' for mining process. Starting from January 2025, 'startMine()' will be completely terminated.")

def startMine(self):
"""
Frequent pattern mining process will start from here
We start by scanning the itemSets and storing their bitsets.
We then form combinations of single items and check them against the minSup condition to determine the frequency of patterns.
"""
self.mine()

def _bitPacker(self, data, maxIndex):
"""
It takes the data and maxIndex as input and generates an integer as the output value.

:param data: it takes data as input.

:type data: int or float

:param maxIndex: It converts the data into bits, taking the maxIndex value as the condition.

:type maxIndex: int

"""
packed_bits = 0
for i in data:
@@ -248,7 +230,6 @@ def mine(self) -> None:
def mine(self) -> None:
"""
Frequent pattern mining process will start from here
# Bitset implementation
"""
self._startTime = _ab._time.time()

@@ -307,6 +288,7 @@ def getMemoryUSS(self):
def getMemoryUSS(self):
"""
Total amount of USS memory consumed by the mining process will be retrieved from this function

:return: returning USS memory consumed by the mining process
:rtype: float
"""
@@ -316,6 +298,7 @@ def getMemoryRSS(self):
def getMemoryRSS(self):
"""
Total amount of RSS memory consumed by the mining process will be retrieved from this function

:return: returning RSS memory consumed by the mining process
:rtype: float
"""
@@ -325,6 +308,7 @@ def getRuntime(self):
def getRuntime(self):
"""
Calculating the total amount of runtime taken by the mining process

:return: returning total amount of runtime taken by the mining process
:rtype: float
"""
@@ -333,11 +317,9 @@ def getRuntime(self):

def getPatternsAsDataFrame(self) -> _ab._pd.DataFrame:
"""

Storing final frequent patterns in a dataframe

:return: returning frequent patterns in a dataframe

:rtype: pd.DataFrame

"""
@@ -358,7 +340,6 @@ def getPatternsAsDataFrame(self) -> _ab._pd.DataFrame:

def save(self, outFile: str, seperator = "\t" ) -> None:
"""

Complete set of frequent patterns will be loaded in to an output file

:param outFile: name of the output file
@@ -379,6 +360,7 @@ def save(self, outFile: str, seperator = "\t" ) -> None:
def getPatterns(self):
"""
Function to send the set of frequent patterns after completion of the mining process

:return: returning frequent patterns
:rtype: dict
"""
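The AprioriBitset docstring and the _bitPacker helper above describe packing item occurrences into integer bitsets. A hedged sketch of that general technique is shown below: each item maps to an integer whose set bits mark the transactions containing it, and the support of an itemset is the popcount of the AND of its items' bit vectors. The function names and layout are assumptions for illustration, not the class's internals.

.. code-block:: python

    # Illustrative sketch of bitset-based support counting; names are hypothetical.

    def build_bitsets(transactions):
        # Map each item to an integer whose bit `tid` is set when transaction
        # `tid` contains the item.
        bitsets = {}
        for tid, transaction in enumerate(transactions):
            for item in transaction:
                bitsets[item] = bitsets.get(item, 0) | (1 << tid)
        return bitsets

    def support(itemset, bitsets):
        # Support = number of transactions containing every item
        #         = popcount of the AND of the items' bit vectors.
        items = iter(itemset)
        combined = bitsets[next(items)]
        for item in items:
            combined &= bitsets[item]
        return bin(combined).count("1")

    # Example:
    # bitsets = build_bitsets([['a', 'b'], ['a', 'c'], ['a', 'b', 'c']])
    # support(['a', 'b'], bitsets)  # -> 2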
9 changes: 3 additions & 6 deletions PAMI/frequentPattern/basic/ECLAT.py
@@ -56,8 +56,7 @@

class ECLAT(_ab._frequentPatterns):
"""
About this algorithm
====================
**About this algorithm**

:**Description**: ECLAT is one of the fundamental algorithms for discovering frequent patterns in a transactional database.

@@ -76,8 +75,7 @@ class ECLAT(_ab._frequentPatterns):
- **memoryRSS** (*float*) -- *To store the total amount of RSS memory consumed by the program.*
- **Database** (*list*) -- *To store the transactions of a database in list.*

Execution methods
=================
**Execution methods**

**Terminal command**

@@ -129,8 +127,7 @@ class ECLAT(_ab._frequentPatterns):
print("Total ExecutionTime in seconds:", run)


Credits:
========
**Credits:**

The complete program was written by Kundai and revised by Tarun Sreepada under the supervision of Professor Rage Uday Kiran.

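The ECLAT docstring above does not describe the search itself; ECLAT is conventionally presented as mining a vertical (item to transaction-id set) representation and intersecting tidsets depth-first. The sketch below follows that conventional formulation, assumes minSup is an absolute count, and is not PAMI's ECLAT class.

.. code-block:: python

    # Illustrative ECLAT-style sketch: vertical tidsets, depth-first intersection.

    def eclat(transactions, minSup):
        # Vertical representation: item -> set of transaction ids containing it.
        tidsets = {}
        for tid, transaction in enumerate(transactions):
            for item in transaction:
                tidsets.setdefault(item, set()).add(tid)

        patterns = {}

        def extend(prefix, prefix_tids, candidates):
            for i, (item, tids) in enumerate(candidates):
                # Tidset of the extended pattern; for an empty prefix it is the
                # item's own tidset.
                new_tids = prefix_tids & tids if prefix else tids
                if len(new_tids) >= minSup:
                    pattern = prefix + (item,)
                    patterns[pattern] = len(new_tids)
                    # Depth-first: only later items can extend the current pattern.
                    extend(pattern, new_tids, candidates[(i + 1):])

        frequent_items = sorted((item, tids) for item, tids in tidsets.items()
                                if len(tids) >= minSup)
        extend((), set(), frequent_items)
        return patterns

    # Example: eclat([['a', 'b'], ['a', 'c'], ['a', 'b', 'c']], minSup=2)
    # -> {('a',): 3, ('a', 'b'): 2, ('a', 'c'): 2, ('b',): 2, ('c',): 2}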